Error loading faster_rcnn models into OpenCV

I followed every step of the TensorFlow Object Detection API exactly and trained the faster_rcnn_resnet50 model. Then I followed the wiki to generate the .pbtxt file so cv2 could read the net.
When I ran the model with OpenCV, it gave no error most of the time, but sometimes raised this error:
cv2.error: OpenCV(3.4.3)
C:\projects\opencv-python\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:495:
error: (-2:Unspecified error) Input layer not found:
CropAndResize/Reshape/shape in function
'cv::dnn::experimental_dnn_34_v7::anonymous-namespace'::TFImporter::connect'
I tried the optimization tool and re-generated the .pbtxt file, but still hit the same error.
Any suggestions to make the model work? Thanks in advance!

Got an answer from @dkurt at https://github.com/opencv/opencv/issues/13050.
Hope it helps if you are experiencing the same problem.

Related

Neo compilation job failed on YOLOv5/v7 model

I was trying to use AWS SageMaker Neo compilation to convert a YOLO model (trained on our custom data) to CoreML format, but got an error on the input config:
ClientError: InputConfiguration: Unable to determine the type of the model, i.e. the source framework. Please provide the value of argument "source", from one of ["tensorflow", "pytorch", "mil"]. Note that model conversion requires the source package that generates the model. Please make sure you have the appropriate version of source package installed.
It seems Neo cannot recognize the YOLO model. Are there any special requirements for the model in AWS SageMaker Neo?
I've tried both the latest yolov7 and yolov5 models, with both .pt and .pth file extensions, but still get the same error. I also tried downgrading PyTorch to 1.8; still not working.
But when I use the yolov4 model from this tutorial post: https://aws.amazon.com/de/blogs/machine-learning/speed-up-yolov4-inference-to-twice-as-fast-on-amazon-sagemaker/, it works fine.
Any idea whether Neo compilation can work with YOLOv7/v5 models?
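One thing worth checking: for the PyTorch framework, Neo expects a TorchScript artifact saved with torch.jit and packaged as model.tar.gz, not a raw state_dict or a pickled training checkpoint, which may be why it cannot determine the source framework. A sketch with a stand-in model (the module and shapes are placeholders, not the actual YOLO network):

```python
import torch
import torch.nn as nn

# Stand-in model; for YOLOv5/v7 you would load your trained detector instead.
model = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU())
model.eval()

# Neo's PyTorch path expects a TorchScript module produced by torch.jit,
# traced with the exact input shape you later declare in the compilation
# job's DataInputConfig (e.g. {"input": [1, 3, 640, 640]}).
example = torch.rand(1, 3, 64, 64)
traced = torch.jit.trace(model, example)
traced.save("model.pth")  # then package this file as model.tar.gz for Neo

# Sanity check: the reloaded TorchScript module matches the original.
reloaded = torch.jit.load("model.pth")
out = reloaded(example)
```

YOLOv5's own export.py can also produce a TorchScript file directly, which is usually easier than tracing by hand.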

How can I read pytorch model file via cv2.dnn.readNetFromTorch()?

I am able to save a custom PyTorch model (this works with any PyTorch version above 1.0).
However, I am not able to read the saved model. I am trying to read it via cv2.dnn.readNetFromTorch() so as to use the model in the OpenCV framework (4.1.0).
I saved the PyTorch model with several different methods, as follows, to see whether the difference affects cv2.dnn's reading function.
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.pt')
torch.save(model.state_dict(), '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.t7')
torch.save(model, '/home/aktaseren/people-opencv/pidxx.pth')
None of these saved files is readable via cv2.dnn.readNetFromTorch().
The error I get is always the same:
cv2.error: OpenCV(4.1.0) ../modules/dnn/src/torch/torch_importer.cpp:1022: error: (-213:The function/feature is not implemented) Unsupported Lua type in function 'readObject'
Do you have any idea how to solve this issue?
The OpenCV documentation states it can only read the Torch7 framework format. There is no mention of .pt or .pth files saved by PyTorch.
This post mentions that PyTorch does not save as .t7:
.t7 was used in Torch7 and is not used in PyTorch. If I'm not mistaken,
the file extension does not change the behavior of torch.save.
An alternative is to export the model as ONNX and then read it in OpenCV using readNetFromONNX.
Yes, we have tested these methods (saving the model as .pt or .pth), and we couldn't load those model files with OpenCV's readNetFromTorch. We should use LibTorch or ONNX as the intermediate model format to read it from C++.

How can I correct this error from testing my Mask R-CNN model

I need help correcting and running this code for Mask R-CNN.
As commented by Gtomika, please provide the error trace in text format. Also provide more details about the repo you got the model from and any other information you think is relevant.
Based on my past experience, I'm pretty sure you are using Matterport's Mask R-CNN and your issue is a class-count mismatch: you are trying to load the weights of a model that was trained on a different number of classes. You should exclude some layers, such as 'mrcnn_bbox_fc' and 'mrcnn_class_logits'.
The fix is to change model.load_weights(COCO_MODEL_PATH, by_name=True) to model.load_weights(filepath, by_name=True, exclude=["mrcnn_class_logits", "mrcnn_bbox_fc", "mrcnn_bbox", "mrcnn_mask"]).
Refer to this issue for more information.

MLMultiArray use of undeclared type error

I am attempting to set up the input for a CoreML model, which takes an MLMultiArray as input. I have referenced solutions on this site for converting a Double array to an MLMultiArray. However, I am getting the "use of undeclared type" error for MLMultiArray.
(screenshot of the error and my code)
Alternatively, do you have any suggestions for how to set up a model that takes a 1D Double array instead? My model is an SVR converted from sklearn.
I am using Xcode 9.2, but as soon as I post this I will update to 9.3 to see if that helps.
Thank you very much.
(also, don't worry--I have cited the relevant stack overflow sources in my header!)
import CoreML
You need to import the framework at the top of your file.

How can I get the XML files of models for object detection?

I'm having big trouble with the libraries I have to use in my project.
Whenever I try one of the libraries, a problem appears, and I don't have much time to lose :( My project is "Image Understanding",
so I need feature extraction, image segmentation, and machine learning.
After reading, it turned out that SVM is the best fit,
and I want some code to build mine on and start from.
1- First I looked at AForge and Accord, and there was an example named "SupportVectorMachine", but it doesn't work on images.
2- I found a great example in EmguCV named "LatentSvmDetector", and it detected every image of a cat I tried! But the problem was the XML file!
I just wanted to know how they got it, and I couldn't find a simple answer.
Actually, I already asked here and nobody answered me :(
[link] How to extract features from image for classification and object recognition?
3- I found an example that uses OpenCV here on this site:
[link] http://www.di.ens.fr/~laptev/download.html
but the same problem: the XML file?!
I tried to use the XML file from that example in the EmguCV example, but it didn't work either.
4- All the papers I read use ImageNet and PASCAL VOC. I downloaded them and they're not working! There were errors in the tools' code, and I fixed them all,
but they're still not compiling; those tools are written in MATLAB.
Here's my question on this site:
[link] Matlab Mex32 link error while compiling Felzenszwalb VOC on Windows
For God's sake, can anybody tell me what I should do?!
I'm running out of time and need your help!
Thanks.
I'm not sure, because I have never used SVM (though I have used haartraining), but I think they trained the detector with a program that outputs an XML file at the end of training. I did a quick search and found this link (the OpenCV docs about SVM training) and this link (a post with an example). I hope it helps and sheds some light.
MATLAB supports XML files, both reading and writing. Try:
xmlfile = fullfile(matlabroot, 'path/to/xml/file/myfile.xml');
xDoc = xmlread(xmlfile)
If you don't have the xmlread function, you can try this toolbox: http://www.mathworks.com/matlabcentral/fileexchange/4278-xml-toolbox
