(-215:Assertion failed) 1 <= blobs.size() && blobs.size() <= 2 - opencv

I'm trying to import my own pre-trained Caffe GoogLeNet model using OpenCV v3.4.3. After training, I ran a Caffe test with the model's deploy file and everything worked fine. However, when I feed the loaded OpenCV net an image blob, I get an exception.
OpenCV code:
Net net = Dnn.readNetFromCaffe("deploy.prototxt","bvlc_googlenet.caffemodel");
Mat image = Imgcodecs.imread(input.getAbsolutePath(), Imgcodecs.IMREAD_COLOR);
Mat blob = Dnn.blobFromImage(image);
System.out.println(image);
System.out.println(blob);
net.setInput(blob);
Mat result = net.forward().reshape(1);
Error output:
Mat [ 24*15*CV_8UC3, isCont=true, isSubmat=false, nativeObj=0x1bcd0740, dataAddr=0x1a9d1880 ]
Mat [ -1*-1*CV_32FC1, isCont=true, isSubmat=false, nativeObj=0x1bcd0eb0, dataAddr=0x1a4e4340 ]
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(3.4.3) Z:\build tools\opencv-3.4.3\modules\dnn\src\layers\fully_connected_layer.cpp:73: error: (-215:Assertion failed) 1 <= blobs.size() && blobs.size() <= 2 in function 'cv::dnn::FullyConnectedLayerImpl::FullyConnectedLayerImpl'
]
at org.opencv.dnn.Net.forward_1(Native Method)
at org.opencv.dnn.Net.forward(Net.java:62)
at test.OpenCVTests.main(OpenCVTests.java:54)
Caffe-train-val-model.prototxt
Caffe-deploy-model.prototxt
Thanks in advance!

This issue was solved for me here:
https://github.com/opencv/opencv/issues/12578#issuecomment-422304736
"there is no loss3/classifier_retrain from Caffe-deploy-model.prototxt
in Caffe-train-val-model.prototxt. If you tried to run this model
several times in Caffe you'll get different outputs for the same input
because Caffe fills missed weights randomly."
(source: github.com/dkurt)
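A quick way to catch this class of problem before loading the model in OpenCV is to compare the layer names declared in the deploy prototxt against those in the train/val prototxt: any layer present only in the deploy file has no trained weights, so Caffe fills it randomly and OpenCV's importer can break on it. A minimal sketch using plain text matching (the inline prototxt snippets below are illustrative, not the asker's actual files):

```python
import re

def layer_names(prototxt_text):
    """Collect layer names from prototxt text with a simple regex."""
    return set(re.findall(r'name:\s*"([^"]+)"', prototxt_text))

# Illustrative one-layer stand-ins for the two prototxt files.
train_val = 'layer { name: "loss3/classifier" }'
deploy = 'layer { name: "loss3/classifier_retrain" }'

# Layers present in deploy but absent from train/val have no learned weights.
print(layer_names(deploy) - layer_names(train_val))
# -> {'loss3/classifier_retrain'}
```

Any name printed here should either be renamed to match the training net or retrained before the caffemodel is exported.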

InvalidArgumentError with CTC loss when training a speech recognition model on Google Colab

I ran the code in my editor (VS Code) without any problems, but because of RAM and GPU limitations I moved it to Google Colab for the next step. There I got an error that seems to stem from a version mismatch between my local environment and Colab. How can I fix this problem?
The version of Python running on Google Colab is 3.8.16; locally I used TensorFlow 2.3.0 and Keras 2.4.3.
The error occurs in this part of the code, when calling model.fit() to train the model (I use CTC loss in the model):
model.fit(
    train_dg,
    validation_data=val_dg,
    epochs=args.epochs,
    callbacks=[PlotLossesKeras(),
               early_stopping,
               cp,
               csv_logger,
               lrs]
)
But I got this error:
Epoch 00001: LearningRateScheduler reducing learning rate to 0.001.
Epoch 1/300
---------------------------------------------------------------------------
InvalidArgumentError                      Traceback (most recent call last)
<ipython-input-87-2b4ea6811b43> in <module>
----> 1 model.fit(train_dg,validation_data=val_dg,epochs=args.epochs,callbacks=[PlotLossesKeras(),early_stopping,cp,csv_logger,lrs])

9 frames
/usr/local/lib/python3.8/dist-packages/tensorflow/python/eager/execute.py in quick_execute(op_name, num_outputs, inputs, attrs, ctx, name)
     57   try:
     58     ctx.ensure_initialized()
---> 59     tensors = pywrap_tfe.TFE_Py_Execute(ctx._handle, device_name, op_name,
     60                                         inputs, attrs, num_outputs)
     61   except core._NotOkStatusException as e:

InvalidArgumentError: Saw a non-null label (index >= num_classes - 1) following a null label, batch: 2 num_classes: 16 labels: 16,0,0,0,0,0,0 labels seen so far: [[node functional_3/CTCloss/CTCLoss (defined at <ipython-input-17-1689d20fc46d>:887) ]] [Op:__inference_train_function_6401]
Function call stack: train_function
I tried changing the Python version in Colab, but it didn't work.
I also changed num_classes in the last layer of my model, which didn't work either.
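The error message itself points at the data rather than the environment: TensorFlow's CTC loss reserves the last class index (num_classes - 1) for the blank token, so every real label must be strictly smaller than that, yet the batch contains a label value of 16 with num_classes = 16. A minimal sketch of the kind of sanity check that surfaces such labels before training (the batch values are taken from the traceback; everything else is illustrative):

```python
def check_ctc_labels(labels, num_classes):
    """Return the label values that are invalid for TF's CTC loss.

    TensorFlow reserves the last class index (num_classes - 1) for the
    blank token, so every real label must be strictly smaller than it.
    """
    return sorted({l for l in labels if l >= num_classes - 1})

# The offending batch from the traceback: a label value of 16
# while num_classes is 16, so valid labels are only 0..14.
batch_labels = [16, 0, 0, 0, 0, 0, 0]
print(check_ctc_labels(batch_labels, num_classes=16))  # -> [16]
```

If this flags anything, the usual fix is to widen the output layer to max_label + 2 units (all real classes plus the blank) rather than to change the TensorFlow or Python version.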

"Exception from Layer: 149" Problem with CoreML model after converting from PyTorch

I trained a YOLOv3-SPP model using PyTorch, exported it to ONNX, and then converted it to CoreML using onnx-coreml. When I try to make a prediction using the model I get this error:
YOLOv3-CoreML[13481:1004975] [espresso] [Espresso::handle_ex_plan] exception=Espresso exception: "Invalid state": reshape mismatching size: 13 13 24 1 1 -> 6 10 8 3 1 [Exception from Layer: 149: 300]
2020-03-16 13:46:05.248612-0500 YOLOv3-CoreML[13481:1004975] [coreml] Error computing NN outputs -1
This is the code I am using to make the prediction:
if let prediction = try? model.prediction(input_1: image) {
    print("Output: \(prediction)")
}
I did some digging to find layer #149. I used this script to find its name:
import coremltools
import numpy as np
mlmodel = coremltools.models.MLModel("model.mlmodel")
spec = mlmodel._spec
print(spec.neuralNetwork.layers[149])
I found its name to be "308". So I opened the model in Netron to inspect it (screenshot omitted; the layer in question was circled in red). How can I get my CoreML model to work properly?
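The reshape error can be sanity-checked by element count: a reshape is only valid when source and target shapes describe the same number of elements. A quick check, with both shapes taken straight from the error message:

```python
from math import prod

src = (13, 13, 24, 1, 1)  # shape arriving at the layer, per the error
dst = (6, 10, 8, 3, 1)    # shape the reshape in layer 149 ("308") wants

print(prod(src), prod(dst))  # 4056 1440 -- the counts disagree
```

One common cause with YOLO conversions is a class-count mismatch between the conversion script and the trained weights (24 channels is consistent with 3 anchors × (3 classes + 5)); that is an assumption to verify against the export settings, not a confirmed diagnosis.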

Error using svm predict function in openCV when loading a saved file with svm load

I'm trying to load a .xml file using the SVM load function in OpenCV, and then use the predict function to classify a traffic sign. When the predict function executes, an error is thrown:
Unhandled exception at 0x00007FFE88E54008 in LicentaFunctii.exe: Microsoft C++ exception: cv::Exception at memory location 0x00000025658FD0C0.
And in the console the following message is logged:
OpenCV Error: Assertion failed (samples.cols == var_count && samples.type()== 5) in cv::ml::SVMImpl::predict, file C:\build\master_winpack-build-win64-
vc14\opencv\modules\ml\src\svm.cpp, line 2005
These are the first 24 lines of the XML file:
<?xml version="1.0"?>
<opencv_storage>
<opencv_ml_svm>
<format>3</format>
<svmType>C_SVC</svmType>
<kernel>
<type>LINEAR</type></kernel>
<C>15.</C>
<term_criteria><epsilon>1.0000000000000000e-02</epsilon>
<iterations>1000</iterations></term_criteria>
<var_count>3600</var_count>
<class_count>7</class_count>
<class_labels type_id="opencv-matrix">
<rows>7</rows>
<cols>1</cols>
<dt>i</dt>
<data>
0 1 2 3 4 5 6</data></class_labels>
<sv_total>21</sv_total>
<support_vectors>
<_>
1.06024239e-02 4.48197760e-02 -4.58896300e-03 -2.43553445e-02
-7.37148002e-02 -1.85971316e-02 -1.32155744e-02 -1.38255786e-02
-3.20396386e-02 8.21578354e-02 7.99100101e-02 -1.21739525e-02
The following code is used to load the trained data from the xml file:
Ptr<SVM> svm = SVM::create();
svm->load("Images/trainedImages.xml");
Note: I'm using OpenCV 3.4.0 version.
Can anyone advise on this issue?
EDIT 1:
It seems that loading the trained file like this will work:
Ptr<SVM> svm = SVM::create();
svm = SVM::load("Images/trainedImages.xml");
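Independently of how the model is loaded, the assertion names its own preconditions: samples.cols == var_count and samples.type() == 5, i.e. predict expects a single-precision (CV_32FC1) row vector with exactly <var_count> = 3600 elements per sample. A numpy sketch of shaping a feature vector accordingly (the descriptor values themselves are hypothetical):

```python
import numpy as np

VAR_COUNT = 3600  # must match <var_count> in the saved SVM file

# Hypothetical HOG descriptor for one image: a flat list of floats.
descriptor = [0.0] * VAR_COUNT

# predict() wants one row per sample, single-precision (CV_32FC1, type 5).
sample = np.asarray(descriptor, dtype=np.float32).reshape(1, -1)

print(sample.shape, sample.dtype)  # (1, 3600) float32
```

If either the column count or the element type disagrees with the trained model, the same assertion fires even when the XML loaded correctly.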

How to extract SIFT features from an image with normalized float values between 0 and 1?

I am using cv2 to compute SIFT features. The values of the gray image are continuous floats between 0 and 1. The problem is that I get the following error if I don't convert the image to uint8:
error: /io/opencv_contrib/modules/xfeatures2d/src/sift.cpp:1121: error: (-5) image is empty or has incorrect depth (!=CV_8U) in function detectAndCompute
After converting to uint8:
kp,des=sift.detectAndCompute(img,None)
plt.imshow(cv2.drawKeypoints(img, kp, out_img.copy()))
plt.show()
I am getting a complete blank image. Could someone please suggest a solution?
I could solve the problem by scaling the float values into the 0-255 range and converting to uint8, with the following commands:
data = data / data.max()      # normalize to the 0-1 range
data = 255 * data             # scale to 0-255
img = data.astype(np.uint8)   # convert to 8-bit (CV_8U), as SIFT requires
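The conversion can be sanity-checked without OpenCV: assuming the input is a float array in [0, 1], the result should be 8-bit and span the full 0-255 range, which is exactly what SIFT's CV_8U depth check requires.

```python
import numpy as np

data = np.linspace(0.0, 1.0, 5)  # stand-in for a float gray image in [0, 1]

img = (255 * (data / data.max())).astype(np.uint8)

print(img.dtype, img.min(), img.max())  # uint8 0 255
```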

OpenCV error "Assertion failed" while using the copyTo function

I have a vector of images and a vector of descriptor values extracted using the HOG descriptor in OpenCV:
vector<Mat> images;
vector< vector < float> > v_descriptorsValues;
These vectors are previously initialized with proper images and values. The part of the code that causes an OpenCV error:
Mat reData(images.size(), v_descriptorsValues[0].size(), true);
for (int i = 0; i < images.size(); i++)
    Mat(v_descriptorsValues[i]).copyTo(reData.row(i));
And the OpenCV error I got:
OpenCV Error: Assertion failed (!fixedSize() || ((Mat*)obj)->size.operator()() == _sz) in unknown function, file ..\..\..\src\opencv\modules\core\src\matrix.cpp, line 1344
In the last line of code I want to copy all the elements of v_descriptorsValues into the reData Mat.
Any idea what can solve the problem?
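A likely culprit is the Mat constructor: its third argument is the element type (e.g. CV_32FC1), not a bool, and Mat(vector<float>) yields an N×1 column while reData.row(i) is a fixed-size 1×N row, so copyTo sees mismatched sizes. Assuming that reading, the C++ fix would be Mat reData(images.size(), v_descriptorsValues[0].size(), CV_32FC1); together with Mat(v_descriptorsValues[i]).reshape(1, 1).copyTo(reData.row(i));. The intended result, sketched with numpy for clarity (names mirror the question; the data is made up):

```python
import numpy as np

# Stand-ins for v_descriptorsValues: one HOG vector per image.
v_descriptors_values = [
    [0.1, 0.2, 0.3],
    [0.4, 0.5, 0.6],
]

# Each descriptor becomes one float32 row of the training matrix,
# which is what the reData Mat is meant to hold.
re_data = np.asarray(v_descriptors_values, dtype=np.float32)

print(re_data.shape)  # (2, 3): one row per image
```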
