Liblinear Error - machine-learning

I am training my dataset in LIBLINEAR with the command train heart_scale, but I am getting this error:
can't open /home/linguistics/.train//train.ini
ERROR: Init file not found (/home/linguistics/.train//train.ini)
train>
Reference:
README file from LIBLINEAR's source code, which I downloaded from here:
http://www.csie.ntu.edu.tw/~cjlin/cgi-bin/liblinear.cgi?+http://www.csie.ntu.edu.tw/~cjlin/liblinear+tar.gz

Try ./train dataset_name instead of train dataset_name, so the shell runs the binary you just compiled in the current directory rather than some other train program found on your PATH (the one looking for ~/.train/train.ini).
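A minimal shell sketch of this PATH-shadowing effect (the demo directory and script are hypothetical, not LIBLINEAR itself):

```shell
# Hypothetical demo: a local executable named `train` is only run
# when invoked with an explicit ./ prefix; the bare name is looked
# up on $PATH and can resolve to an unrelated program.
cd "$(mktemp -d)"
printf '#!/bin/sh\necho "local liblinear train"\n' > train
chmod +x train

./train heart_scale   # runs the local binary: prints "local liblinear train"
type train            # shows what the bare name resolves to instead
```

The same rule explains the original error: the shell never searches the current directory for a bare command name unless "." happens to be on PATH.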

Related

(-215:Assertion failed) inputs.size() in function 'cv::dnn::dnn4_v20211004::Layer::getMemoryShapes'

I am doing text extraction on specific regions using YOLOv5. I trained a model and converted it to ONNX format so that OpenCV can read it, but when I load the model weights this error occurs. None of the Stack Overflow threads or GitHub issues I found solve my problem; I downgraded my torch version according to the resolved GitHub issues, but the issue is still there.
If anyone has an idea about this error, I would appreciate the help. The error is below:
[ERROR:0] global D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp (720) cv::dnn::dnn4_v20211004::ONNXImporter::handleNode DNN/ONNX: ERROR during processing node with 1 inputs and 1 outputs: [Identity]:(onnx::Resize_445)
Traceback (most recent call last):
File "C:\Users\Python_Coder\Desktop\YOLO_T_ID\Custom-OCR-with-YOLO\Custom_OCRs.py", line 176, in <module>
net = cv2.dnn.readNet(modelWeights)
cv2.error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\onnx\onnx_importer.cpp:739: error: (-2:Unspecified error) in function 'cv::dnn::dnn4_v20211004::ONNXImporter::handleNode'
> Node [Identity]:(onnx::Resize_445) parse error: OpenCV(4.5.4) D:\a\opencv-python\opencv-python\opencv\modules\dnn\src\dnn.cpp:5653: error: (-215:Assertion failed) inputs.size() in function 'cv::dnn::dnn4_v20211004::Layer::getMemoryShapes'

The OpenCV DNN face detection module cannot work with the Caffe C++ library

I downloaded the Caffe source code from GitHub and compiled it as a C++ static library. When I test the OpenCV face detection Caffe model against this library, it reports the error below:
[libprotobuf ERROR D:\ThirdPartyLibrary\protobuf\src\google\protobuf\text_format.cc:296] Error parsing text-format caffe.NetParameter: 984:14: Message type "caffe.LayerParameter" has no field named "norm_param".
F0328 02:08:05.225075 24332 upgrade_proto.cpp:88] Check failed: ReadProtoFromTextFile(param_file, param) Failed to parse NetParameter file: D:/DATA/PreTrainedModel/cv_facedet/deploy.prototxt
Does this mean the norm_param field is implemented only by OpenCV and is not a standard Caffe field?
norm_param is a parameter of the Normalize layer from the SSD Caffe framework: https://github.com/weiliu89/caffe/blob/8a65ae316b34e7c8fdefa6e18bf08a23b78caa0e/src/caffe/proto/caffe.proto#L523
Original repo: https://github.com/weiliu89/caffe/tree/ssd
There are no SSD object detection networks in the original Caffe.
It probably has a different name in Caffe: normalize_bbox_param. See this discussion.
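For reference, this is roughly how a Normalize layer using norm_param appears in an SSD deploy.prototxt (a sketch based on the linked caffe.proto; the layer names and values are illustrative):

```
layer {
  name: "conv4_3_norm"
  type: "Normalize"
  bottom: "conv4_3"
  top: "conv4_3_norm"
  norm_param {
    across_spatial: false
    scale_filler {
      type: "constant"
      value: 20
    }
    channel_shared: false
  }
}
```

A prototxt containing norm_param can only be parsed by a Caffe build whose caffe.proto declares that field (i.e. the SSD fork); the upstream Caffe protobuf rejects it with exactly the "has no field named" error shown above.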

loading Tensorflow model in Opencv 3.4.1 failed

I'm using OpenCV 3.4.1 DNN in Java to load a "LeNet" model trained with Keras and TensorFlow in Python. The model is saved as a TensorFlow frozen model (.pb), and I'm using the following code to load it:
Dnn cvDnn = new org.opencv.dnn.Dnn();
Net net = cvDnn.readNetFromTensorflow("C:\\Users\\kr\\Desktop\\Plate_Recognition_frozen.pb");
The error says:
OpenCV(3.4.1) Error: Unspecified error (Input layer not found: convolution2d_1_b_1) in cv::dnn::experimental_dnn_v4::`anonymous-namespace'::TFImporter::connect, file C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\dnn\src\tensorflow\tf_importer.cpp, line 553
Exception in thread "main" CvException [org.opencv.core.CvException: cv::Exception: OpenCV(3.4.1) C:\build\master_winpack-bindings-win64-vc14-static\opencv\modules\dnn\src\tensorflow\tf_importer.cpp:553: error: (-2) Input layer not found: convolution2d_1_b_1 in function cv::dnn::experimental_dnn_v4::`anonymous-namespace'::TFImporter::connect
]
at org.opencv.dnn.Dnn.readNetFromTensorflow_1(Native Method)
at org.opencv.dnn.Dnn.readNetFromTensorflow(Dnn.java:163)
at opencv.Main.main(Main.java:44)
Any help would be appreciated, thanks in advance.

CoreMLTools converted Keras Model fails at VNCoreMLTransform

I'm learning Apple's Vision and Core ML frameworks but got stuck on how to use my own retrained models. I trained a VGG16 model with Keras based on this tutorial. Everything looks OK except for some Keras version warnings. Then I tried converting the resulting model with coremltools using the following code:
coremlModel = coremltools.converters.keras.convert(
    kmodel,
    input_names='image',
    image_input_names='image',
    output_names='classLabelProbs',
    class_labels=['cats', 'dogs'],
)
During the conversion it gave me some version compatibility warnings, but otherwise it was successful:
WARNING:root:Keras version 2.0.6 detected. Last version known to be fully compatible of Keras is 2.0.4 .
WARNING:root:TensorFlow version 1.2.1 detected. Last version known to be fully compatible is 1.1.1 .
So I loaded this model into Apple's Vision+ML example code, but every time I tried to classify an image it failed with these errors:
Vision+ML Example[2090:2012481] Error: The VNCoreMLTransform request failed
Vision+ML Example[2090:2012481] Didn't get VNClassificationObservations
Error Domain=com.apple.vis Code=3 "The VNCoreMLTransform request failed" UserInfo={NSLocalizedDescription=The VNCoreMLTransform request failed, NSUnderlyingError=0x1c025d130 {Error Domain=com.apple.CoreML Code=0 "Dimensions of layer 'classLabelProbs' is not the same size as the number of class labels." UserInfo={NSLocalizedDescription=Dimensions of layer 'classLabelProbs' is not the same size as the number of class labels.}}}
I was guessing this is because the pre-trained VGG16 model already has 1000 categories, so I tried 1000 categories and 1000 + 2 (cats and dogs) categories, but I still got the same problem.
Did I miss anything? I'd greatly appreciate any clue and help.
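The error message points at a count mismatch: Core ML checks that the dimension of the output layer named in output_names equals len(class_labels). If the retrained model still ends in VGG16's original 1000-way softmax, converting with two labels fails exactly this way. A plain-Python sketch of the invariant (no frameworks; the numbers are illustrative, not taken from the question's model):

```python
# Core ML's check, restated in plain Python: the size of the softmax
# output layer must equal the number of class labels passed to the
# converter, so each probability maps one-to-one onto a label.
class_labels = ['cats', 'dogs']   # labels given to coremltools

def labels_match(output_dim, class_labels):
    """True when the output layer maps one-to-one onto the labels."""
    return output_dim == len(class_labels)

print(labels_match(1000, class_labels))  # False: original 1000-way head
print(labels_match(2, class_labels))     # True: head replaced with 2 units
```

In Keras terms the fix would be to replace the final 1000-unit classifier layer with a 2-unit one before converting, so the 'classLabelProbs' dimension matches the two labels (an assumption about the tutorial's model, not something verified here).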

opencv gpu cannot load feature type haar

I am trying to use the OpenCV GPU-based cascade classifier. Here is a snippet where I try to load the cascade:
gpu::CascadeClassifier_GPU cascade_gpu;
if (!cascade_gpu.load(HAARCASCADE_FRONTALFACE3))
    return 0;
I am getting the following error when I try to load it:
Unspecified error (The node does not represent a user object (unknown type?)) in cvRead, file /Users/ashok/ivsvn/3rdparty/trunk/opencv-2.4.8/modules/core/src/persistence.cpp, line 4991
Here is the header of the cascade file.
<?xml version="1.0"?>
<opencv_storage>
<cascade>
<stageType>BOOST</stageType>
<featureType>HAAR</featureType>
<height>20</height>
<width>20</width>
<stageParams>
<boostType>GAB</boostType>
<minHitRate>9.9949997663497925e-01</minHitRate>
<maxFalseAlarm>3.4999999403953552e-01</maxFalseAlarm>
<weightTrimRate>9.4999999999999996e-01</weightTrimRate>
<maxDepth>1</maxDepth>
<maxWeakCount>1000</maxWeakCount></stageParams>
<featureParams>
<maxCatCount>0</maxCatCount>
<featSize>1</featSize>
<mode>ALL</mode></featureParams>
<stageNum>18</stageNum>
<stages> ...
I have a similar cascade file trained with LBP that works with the GPU load.
Any clue on this issue?
