Save a TensorFlow 2.0 model and use it in OpenCV 4 - opencv

I currently build my models with TensorFlow 2.0 and I want to run them with OpenCV 4 (to compare performance), but I can't find a way to convert my TensorFlow model for OpenCV.
For running in opencv I want to use:
cv2.dnn.readNetFromTensorflow('saved_model.pb', 'saved_model.pbtxt')
but when I save my model with:
model.save('./')
I obtain these files:
saved_model.pb | variables/variables.index | variables/variables.data-00000-of-00002 | variables/variables.data-00001-of-00002
I have my .pb but not my .pbtxt. How is it possible to write this file? According to the OpenCV documentation, this file is the text graph definition. I already tried to write a .pbtxt with
model.to_json()
but it didn't work :/
Do you have any ideas ?
Thanks in advance !
Tanguy

Additionally, OpenCV requires an extra configuration file based on the
.pb, the .pbtxt. It is possible to import your own models and
generate your own .pbtxt files by using one of the following files
from the OpenCV Github repository...
Here is a link to a tutorial: https://jeanvitor.com/tensorflow-object-detecion-opencv/
Haven't tried it myself, but it seems legit.
For example, tf_text_graph_ssd.py gets the job done.
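If it helps, here is a minimal sketch of that workflow for an SSD-style detection graph; the file names frozen_inference_graph.pb, pipeline.config and test.jpg below are placeholders, not files from the question:
# Step 1 (shell): generate the text graph with the helper script from the OpenCV repo, e.g.
#   python tf_text_graph_ssd.py --input frozen_inference_graph.pb --config pipeline.config --output graph.pbtxt
# Step 2: load the binary graph plus the generated .pbtxt with the DNN module.
import cv2
net = cv2.dnn.readNetFromTensorflow('frozen_inference_graph.pb', 'graph.pbtxt')
blob = cv2.dnn.blobFromImage(cv2.imread('test.jpg'), size=(300, 300), swapRB=True)
net.setInput(blob)
detections = net.forward()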

Related

Neo compilation job failed on YOLOv5/v7 model

I was trying to use AWS SageMaker Neo compilation to convert a YOLO model (trained with our custom data) to a CoreML format, but got an error on the input config:
ClientError: InputConfiguration: Unable to determine the type of the model, i.e. the source framework. Please provide the value of argument "source", from one of ["tensorflow", "pytorch", "mil"]. Note that model conversion requires the source package that generates the model. Please make sure you have the appropriate version of source package installed.
It seems Neo cannot recognize the YOLO model; are there any special requirements for the model in AWS SageMaker Neo?
I've tried both the latest YOLOv7 model and the YOLOv5 model, and both .pt and .pth file extensions, but still get the same error. I also tried downgrading the PyTorch version to 1.8, but it's still not working.
But when I use the YOLOv4 model from this tutorial post: https://aws.amazon.com/de/blogs/machine-learning/speed-up-yolov4-inference-to-twice-as-fast-on-amazon-sagemaker/, it works fine.
Any idea whether Neo compilation can work with a YOLOv7/v5 model?
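For what it's worth, Neo's PyTorch path generally expects a traced TorchScript archive (saved with torch.jit.trace / torch.jit.save and packaged in a model.tar.gz) rather than a raw state_dict. A sketch of that export step might look like the following; the tiny placeholder network only stands in for the trained YOLO model, whose detection head may need extra export tweaks:
import torch
import torch.nn as nn

# Placeholder network standing in for the trained YOLO model; load your own instead.
model = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.Conv2d(16, 255, 1))
model.eval()

# Trace with a fixed input shape; the same shape goes into the job's DataInputConfig.
example = torch.rand(1, 3, 640, 640)
traced = torch.jit.trace(model, example)
torch.jit.save(traced, 'model.pth')
# Package model.pth into model.tar.gz before creating the compilation job.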

How can I read a PyTorch model file via cv2.dnn.readNetFromTorch()?

I am able to save a custom PyTorch model (this works with any PyTorch version above 1.0).
However, I am not able to read the saved model. I am trying to read it via cv2.dnn.readNetFromTorch() so as to use the model in the OpenCV framework (4.1.0).
I saved the PyTorch model with the different methods below to see whether the way the model is saved affects how cv2.dnn reads it.
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.pt')
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.pth')
None of these saved files can be read via cv2.dnn.readNetFromTorch().
The error I am getting is always the same, shown below.
cv2.error: OpenCV(4.1.0) ../modules/dnn/src/torch/torch_importer.cpp:1022: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'readObject'
Do you have any idea how to solve this issue?
The OpenCV documentation states that it can only read the Torch7 framework format. There is no mention of .pt or .pth files saved by PyTorch.
This post mentions that PyTorch does not save as .t7:
.t7 was used in Torch7 and is not used in PyTorch. If I'm not mistaken, the file extension does not change the behavior of torch.save.
An alternative method is to export the model as ONNX, then read the model in OpenCV using readNetFromONNX.
Yes, we have tested these methods (saving the model as .pt or .pth), and we couldn't load those model files with OpenCV's readNetFromTorch. We should use LibTorch or ONNX as the intermediate model format to be read from C++.
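A minimal sketch of that ONNX route, assuming a plain classification network; torchvision's resnet18 below is only a placeholder for your own trained model, and the input size and file names are assumptions:
import cv2
import torch
import torchvision

model = torchvision.models.resnet18()  # placeholder; substitute your trained model
model.eval()
dummy = torch.randn(1, 3, 224, 224)
torch.onnx.export(model, dummy, 'model.onnx', opset_version=11)

# Read the exported model with OpenCV's DNN module instead of readNetFromTorch.
net = cv2.dnn.readNetFromONNX('model.onnx')
blob = cv2.dnn.blobFromImage(cv2.imread('test.jpg'), 1.0 / 255, (224, 224))
net.setInput(blob)
out = net.forward()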

How do I convert a .bin code property graph to JSON?

How can I convert a code property graph (CPG) obtained from Joern (https://joern.io/) from .bin format to .json format so I can feed it to a graph machine learning library for classification?
Note: CPG = AST + Control Flow Graph + Program Dependency Graph
Task: Machine Learning on Source Code.
You can use the Scala script 'graph-for-funcs.sc', which is included in the Joern scripts directory. However, you need to redirect the output in order to store it in a file (since the output goes to stdout by default).
I made a custom script to do so.
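Once the redirected output is saved as JSON, a rough sketch for pulling it into a Python graph library follows; the key names 'nodes', 'edges', 'in', 'out' and 'label' are assumptions about the script's output, so adjust them to what your Joern version actually emits:
import json
import networkx as nx

with open('cpg.json') as f:  # the redirected graph-for-funcs.sc output
    data = json.load(f)

# Build a directed graph that a graph ML library can consume.
g = nx.DiGraph()
for node in data.get('nodes', []):
    g.add_node(node['id'], **node)
for edge in data.get('edges', []):
    g.add_edge(edge['out'], edge['in'], label=edge.get('label'))

print(g.number_of_nodes(), g.number_of_edges())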

How to extract topical key phrases using MALLET

I have imported the file into MALLET; now I want to model topics from the imported data and store them in a text file, from which I will be able to read those topics. Can anyone help with writing the commands for topic extraction? I typed the command below, but it throws an exception.
bin\mallet import-dir --input D:\Data\test1 --output test1.mallet --keep-sequence --remove-stopwords --extra-stopwords extra.txt
By removing --keep-sequence --remove-stopwords --extra-stopwords extra.txt I am able to import the file, but after that, when I try to train the model, an exception is thrown.
I recommend using the GUI for MALLET:
https://code.google.com/p/topic-modeling-tool/
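For the command-line route the question asks about, topic training usually looks something like the line below (file names and the topic count are placeholders). Note that train-topics needs instances imported with --keep-sequence, which is likely why dropping that flag leads to an exception later:
bin\mallet train-topics --input test1.mallet --num-topics 20 --output-topic-keys topic_keys.txt --output-doc-topics doc_topics.txt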

How can I get the XML files of models for object detection?

I'm having big trouble with the libraries that I have to use in my project.
Whenever I try one of the libraries, a problem appears, and I don't have much time to lose :( My project is "Image Understanding",
so I need feature extraction, image segmentation, and machine learning.
After reading, it turned out that SVM is the best option,
and I want some code to build mine on and get started.
1- First I looked at AForge & Accord, and there was an example named "SupportVectorMachine", but it's not about images.
2- I found a great example in EmguCV named "LatentSvmDetector", and it detected every image of a cat I tried! But the problem was the XML file.
I just wanted to know how they got it, and I couldn't find a simple answer.
Actually, I already asked here and nobody answered me :(
[link] How to extract features from image for classification and object recognition?
3- I found an example that uses OpenCV on this site:
[link] http://www.di.ens.fr/~laptev/download.html
but the same problem: the XML file!
I took the XML file from this example and tried it with the EmguCV example, but it didn't work either.
4- All the papers that I read use ImageNet & PASCAL VOC. I downloaded them and they're not working: errors in the code of the tool, which I've fixed,
but they're still not compiling. Those tools are written in MATLAB.
Here's my question on this site:
[link] Matlab Mex32 link error while compiling Felzenszwalb VOC on Windows
For God's sake, can anybody tell me what I should do?!
I'm running out of time and need your help!
Thanks.
I'm not sure, because I have never used SVM (only Haar training), but I think they trained the detector using a program that outputs an XML file at the end of the training. I did a quick search and found this link (OpenCV docs about SVM training) and this link (a post with an example). I hope it helps you and sheds some light.
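As an illustration of that idea, OpenCV's own ml module can train an SVM and save it as XML; the data below is random placeholder data, and note this produces OpenCV's ml SVM format, which is not the same XML layout the LatentSvmDetector models use:
import cv2
import numpy as np

# Placeholder data: 100 random 16-dimensional feature vectors with binary labels.
samples = np.random.rand(100, 16).astype(np.float32)
labels = np.random.randint(0, 2, (100, 1)).astype(np.int32)

svm = cv2.ml.SVM_create()
svm.setType(cv2.ml.SVM_C_SVC)
svm.setKernel(cv2.ml.SVM_LINEAR)
svm.train(samples, cv2.ml.ROW_SAMPLE, labels)
svm.save('trained_svm.xml')  # the XML file produced at the end of training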
MATLAB supports xml files - both reading and writing. Try:
xmlfile = fullfile(matlabroot, 'path/to/xml/file/myfile.xml');
xDoc = xmlread(xmlfile)
If you don't have the xmlread function, you can try this toolbox: http://www.mathworks.com/matlabcentral/fileexchange/4278-xml-toolbox
