Failed to parse NetParameter file in function 'ReadNetParamsFromBinaryFileOrDie' - image-processing

I got stuck with this error while creating a DL model for image manipulation.
error: OpenCV(4.6.0) /io/opencv/modules/dnn/src/caffe/caffe_io.cpp:1176: error: (-2:Unspecified error) FAILED: ReadProtoFromBinaryFile(param_file, param). Failed to parse NetParameter file: /content/res10_300x300_ssd_iter_140000.caffemodel in function 'ReadNetParamsFromBinaryFileOrDie'
Here's what I tried:

# Detect the face in the image using a pre-trained deep learning model
face_detector = cv2.dnn.readNetFromCaffe(prototxt="/content/deploy.txt",
                                         caffeModel="/content/res10_300x300_ssd_iter_140000.caffemodel")
(h, w) = image.shape[:2]
blob = cv2.dnn.blobFromImage(cv2.resize(image, (300, 300)), 1.0, (300, 300), (104.0, 177.0, 123.0))
face_detector.setInput(blob)
detections = face_detector.forward()
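This parse error typically surfaces when the .caffemodel on disk is not a valid Caffe binary, for example a truncated download or a saved HTML error page instead of the raw file. A minimal diagnostic sketch, assuming the paths from the question:

import os
import cv2

proto_path = "/content/deploy.txt"  # usually named deploy.prototxt
model_path = "/content/res10_300x300_ssd_iter_140000.caffemodel"

# A partial or wrong download is a common cause of this parse failure,
# so check that both files exist and are not suspiciously small before loading.
for p in (proto_path, model_path):
    size = os.path.getsize(p) if os.path.exists(p) else None
    print(p, "exists:", os.path.exists(p), "size:", size)

# Re-download the .caffemodel if it is only a few KB, then try loading again.
net = cv2.dnn.readNetFromCaffe(proto_path, model_path)
print("Model loaded, layers:", len(net.getLayerNames()))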

Related

UnidentifiedImageError: cannot identify image file in CovidX dataset

I am using the CoViDx dataset. When I try to open images, sometimes it works well, sometimes it prints the error in the title: UnidentifiedImageError: cannot identify image file.
This is the code. My covid dataset class:
from PIL import Image
from torch.utils.data import Dataset

class CovidDataset(Dataset):
    def __init__(self, dataset_df, transform=None):
        self.dataset_df = dataset_df
        self.transform = transform

    def __len__(self):
        return self.dataset_df.shape[0]

    def __getitem__(self, idx):
        image_name = self.dataset_df['filename'][idx]
        img = Image.open(image_name)
        label = self.dataset_df['class'][idx]
        if self.transform:
            img = self.transform(img)
        return img, label
Then train_dataset = CovidDataset(train_df, transform=image_transforms['train']) and when I do train_dataset[0] it prints:
(tensor([[[-0.1433, -0.1312, -0.1072, ..., -1.2137, -1.2017, -1.2017],
[-0.1673, -0.1312, -0.1312, ..., -1.2017, -1.2137, -1.2137],
[-0.1673, -0.1433, -0.1312, ..., -1.1776, -1.2137, -1.2137],
...,
[-0.7687, -0.7807, -0.8048, ..., -0.6725, -0.6965, -0.6725],
[-0.7446, -0.7687, -0.7807, ..., -0.6604, -0.6965, -0.6845],
[-0.7446, -0.7567, -0.7927, ..., -0.6604, -0.6965, -0.6845]]]),
'negative')
Instead, if I do train_dataset[1] I have this error: UnidentifiedImageError: cannot identify image file 'train/sub-S03144_ses-E06258_run-1_bp-chest_vp-ap_dx-corrected.png'
So some images open fine while others fail. How can I fix this? I have already seen this post: link. However, when I apply the script from that answer I get this error: NotADirectoryError: [Errno 20] Not a directory: 'train/37ae5f8b-8504-479e-bdbd-58dc6158f0f6.png'
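UnidentifiedImageError means Pillow could not recognize the file as an image at all (often zero-byte, truncated, or mislabeled files). A small diagnostic sketch that lists such files up front, assuming train_df has the same 'filename' column used by the class above:

from PIL import Image, UnidentifiedImageError

bad_files = []
for name in train_df['filename']:
    try:
        with Image.open(name) as img:
            img.verify()  # cheap integrity check without fully decoding the image
    except (UnidentifiedImageError, FileNotFoundError, OSError) as e:
        bad_files.append((name, repr(e)))

print(len(bad_files), "unreadable images")
for name, err in bad_files[:20]:
    print(name, err)

Once the offending files are known, they can be re-downloaded or dropped from the dataframe before building the Dataset.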

cv2.blur() bad argument?

img = cv2.imread("C:....\\DogInCar.png", cv2.IMREAD_GRAYSCALE)
blur = cv2.blur(img, (5, 5), (-1, -1), cv2.BORDER_REFLECT)
And this error appeared.
cv2.error: OpenCV(4.5.4-dev) :-1: error: (-5:Bad argument) in function 'blur'
> Overload resolution failed:
> - Can't parse 'anchor'. Input argument doesn't provide sequence protocol
> - Can't parse 'anchor'. Input argument doesn't provide sequence protocol
I would like to know the reason for the error.
Look at the signature (help is your friend!):
>>> help(cv2.blur)
Help on built-in function blur:
blur(...)
blur(src, ksize[, dst[, anchor[, borderType]]]) -> dst
. #brief Blurs an image using the normalized box filter.
...
Since you skip 'dst', you have to pass the arguments after it as named arguments explicitly:
blur = cv2.blur(img, (5, 5), anchor=(-1, -1), borderType=cv2.BORDER_REFLECT)
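Alternatively (a variant not in the original answer, sketched from the same signature), the call can stay fully positional if None is passed as a placeholder for dst:

blur = cv2.blur(img, (5, 5), None, (-1, -1), cv2.BORDER_REFLECT)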

How to batch process with multiple Bounding Boxes in imgaug

I'm trying to set up a data augmentation pipeline with imgaug. The transformation of the images works and does not throw any errors. In a second step I tried to transform the N bounding boxes belonging to each image, and I get a persistent error.
def image_batch_augmentation(batch_images, batch_bbox, batch_image_shape):
    def create_BoundingBox(bbox):
        return BoundingBox(bbox[0], bbox[1], bbox[2], bbox[3], bbox[4])

    bbox = [[create_BoundingBox(bbox) for bbox in batch if sum(bbox) != 0] for batch in batch_bbox]
    bbox = [BoundingBoxesOnImage(batch, shape=(h, w))
            for batch, w, h in zip(bbox, batch_image_shape[0], batch_image_shape[1])]

    seq_det = seq.to_deterministic()
    aug_image = seq_det.augment_images(image.numpy())
    aug_bbox = [seq_det.augment_bounding_boxes(batch) for batch in bbox]

    return aug_image, aug_bbox
The error occurs in the following line:
aug_bbox = seq_det.augment_bounding_boxes(bbox)
Exception has occurred: InvalidArgumentError
cannot compute Mul as input #1(zero-based) was expected to be a double tensor but is a int64 tensor [Op:Mul] name: mul/
I have already tried several different approaches but I can't get any further. Furthermore, I haven't found any information in the docs or other known platforms that would help me to get the code running.
As the error message already suggests, the problem lies in the data types. Casting them explicitly fixed it.
Here is the corrected code, which now runs:
def image_batch_augmentation(batch_images, batch_bbox, batch_image_shape):
    def create_BoundingBox(bbox, w, h):
        return BoundingBox(bbox[0]*h, bbox[1]*w, bbox[2]*h, bbox[3]*w, tf.cast(bbox[4], tf.int32))

    bbox = [[create_BoundingBox(bbox, float(w), float(h)) for bbox in batch if sum(bbox) != 0]
            for batch, w, h in zip(batch_bbox, batch_image_shape[0], batch_image_shape[1])]
    bbox = [BoundingBoxesOnImage(batch, shape=(int(w), int(h)))
            for batch, w, h in zip(bbox, batch_image_shape[0], batch_image_shape[1])]

    seq_det = seq.to_deterministic()
    images_aug = seq_det.augment_images(image.numpy())
    bbsoi_aug = seq_det.augment_bounding_boxes(bbox)

    return images_aug, bbsoi_aug
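For reference, the same deterministic-augmentation pattern in isolation, with plain Python floats instead of TensorFlow tensors. This is a minimal standalone sketch, not code from the question's pipeline, and the augmenters chosen here are arbitrary:

import numpy as np
import imgaug.augmenters as iaa
from imgaug.augmentables.bbs import BoundingBox, BoundingBoxesOnImage

seq = iaa.Sequential([iaa.Fliplr(1.0), iaa.Affine(translate_px={"x": 10})])

image = np.zeros((128, 128, 3), dtype=np.uint8)
bbs = BoundingBoxesOnImage(
    [BoundingBox(x1=10.0, y1=20.0, x2=50.0, y2=60.0, label=1)],
    shape=image.shape)

seq_det = seq.to_deterministic()            # same transform applied to image and boxes
image_aug = seq_det.augment_images([image])[0]
bbs_aug = seq_det.augment_bounding_boxes([bbs])[0]
print(bbs_aug.bounding_boxes[0])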

OpenCV error -- face recognition on Mac

I have face detection training code. It gives me some issues and I have no clue why.
I am using a Mac and it seems like something is missing. Can you please advise what I should do?
Thank you in advance.
OpenCV(3.4.1) Error: Assertion failed (!empty()) in detectMultiScale, file /tmp/opencv-20180426-73279-16a912g/opencv-3.4.1/modules/objdetect/src/cascadedetect.cpp, line 1698
Traceback (most recent call last):
File "/Users/Desktop/OpenCV-Python-Series-master/src/faces-train.py", line 36, in <module>
faces = face_cascade.detectMultiScale(image_array, scaleFactor=1.5, minNeighbors=5)
cv2.error: OpenCV(3.4.1) /tmp/opencv-20180426-73279-16a912g/opencv-3.4.1/modules/objdetect/src/cascadedetect.cpp:1698: error: (-215) !empty() in function detectMultiScale
[Finished in 0.421s]
And my code is below.
import cv2
import os
import numpy as np
from PIL import Image
import pickle

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
image_dir = os.path.join(BASE_DIR, "images")
face_cascade = cv2.CascadeClassifier('cascades/data/haarcascade_frontalface_alt2.xml')
recognizer = cv2.face.LBPHFaceRecognizer_create()

current_id = 0
label_ids = {}
y_labels = []
x_train = []

for root, dirs, files in os.walk(image_dir):
    for file in files:
        if file.endswith("png") or file.endswith("jpg"):
            path = os.path.join(root, file)
            label = os.path.basename(root).replace(" ", "-").lower()
            #print(label, path)
            if not label in label_ids:
                label_ids[label] = current_id
                current_id += 1
            id_ = label_ids[label]
            #print(label_ids)
            #y_labels.append(label) # some number
            #x_train.append(path) # verify this image, turn into a NUMPY array, GRAY
            pil_image = Image.open(path).convert("L") # grayscale
            size = (550, 550)
            final_image = pil_image.resize(size, Image.ANTIALIAS)
            image_array = np.array(final_image, "uint8")
            #print(image_array)
            faces = face_cascade.detectMultiScale(image_array, scaleFactor=1.5, minNeighbors=5)
            for (x, y, w, h) in faces:
                roi = image_array[y:y+h, x:x+w]
                x_train.append(roi)
                y_labels.append(id_)

#print(y_labels)
#print(x_train)

with open("pickles/face-labels.pickle", 'wb') as f:
    pickle.dump(label_ids, f)

recognizer.train(x_train, np.array(y_labels))
recognizer.save("recognizers/face-trainner.yml")
The assertion that fails indicates that your cascade is not loaded correctly. You can verify it by calling face_cascade.empty() just after the constructor. Please make sure that the path you provided ('cascades/data/haarcascade_frontalface_alt2.xml') is correct. When it points to a non-existent file, the constructor does not throw an exception, so you can easily miss the problem unless you call empty() explicitly.
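A minimal sketch of that check, assuming the cascades folder sits next to the script as in the question (the relative path otherwise depends on the working directory the script is launched from):

import os
import cv2

BASE_DIR = os.path.dirname(os.path.abspath(__file__))
cascade_path = os.path.join(BASE_DIR, "cascades", "data", "haarcascade_frontalface_alt2.xml")
# alternative source for the standard cascades in newer opencv-python builds:
# cv2.data.haarcascades + "haarcascade_frontalface_alt2.xml"

face_cascade = cv2.CascadeClassifier(cascade_path)
if face_cascade.empty():
    # The constructor does not raise on a bad path, so fail loudly here instead.
    raise IOError("Could not load cascade from " + cascade_path)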

How to train an SVM with opencv based on a set of images?

I have a folder of positive images and another of negative images in JPG format, and I want to train an SVM on those images. I've done the following, but I receive an error:
Mat classes = new Mat();
Mat trainingData = new Mat();
Mat trainingImages = new Mat();
Mat trainingLabels = new Mat();
CvSVM clasificador;

for (File file : new File(path + "positives/").listFiles()) {
    Mat img = Highgui.imread(file.getAbsolutePath());
    img.reshape(1, 1);
    trainingImages.push_back(img);
    trainingLabels.push_back(Mat.ones(new Size(1, 1), CvType.CV_32FC1));
}

for (File file : new File(path + "negatives/").listFiles()) {
    Mat img = Highgui.imread(file.getAbsolutePath());
    img.reshape(1, 1);
    trainingImages.push_back(img);
    trainingLabels.push_back(Mat.zeros(new Size(1, 1), CvType.CV_32FC1));
}

trainingImages.copyTo(trainingData);
trainingData.convertTo(trainingData, CvType.CV_32FC1);
trainingLabels.copyTo(classes);

CvSVMParams params = new CvSVMParams();
params.set_kernel_type(CvSVM.LINEAR);
clasificador = new CvSVM(trainingData, classes, new Mat(), new Mat(), params);
When I try to run that I obtain:
OpenCV Error: Bad argument (train data must be floating-point matrix) in cvCheckTrainData, file ..\..\..\src\opencv\modules\ml\src\inner_functions.cpp, line 857
Exception in thread "main" CvException [org.opencv.core.CvException: ..\..\..\src\opencv\modules\ml\src\inner_functions.cpp:857: error: (-5) train data must be floating-point matrix in function cvCheckTrainData
]
at org.opencv.ml.CvSVM.CvSVM_1(Native Method)
at org.opencv.ml.CvSVM.<init>(CvSVM.java:80)
I can't manage to train the SVM, any idea? Thanks
Assuming that you know what you are doing by reshaping an image and using it to train an SVM, the most probable cause of this is that your
Mat img = Highgui.imread(file.getAbsolutePath());
fails to actually read an image, generating a matrix img with null data property, which will eventually trigger the following in the OpenCV code:
// check parameter types and sizes
if( !CV_IS_MAT(train_data) || CV_MAT_TYPE(train_data->type) != CV_32FC1 )
    CV_ERROR( CV_StsBadArg, "train data must be floating-point matrix" );
Basically train_data fails the first condition (being a valid matrix) rather than failing the second condition (being of type CV_32FC1).
In addition, reshape does not modify the matrix in place: it returns a new Mat header, so calling it in a statement of its own without using or assigning the result does nothing. Change the following lines in your code:
img.reshape(1, 1);
trainingImages.push_back(img);
to:
trainingImages.push_back(img.reshape(1, 1));
Just as the error says, you need to change the type of your matrix from an integer type, probably CV_8U, to a floating-point one, CV_32F or CV_64F. To do that you can use cv::Mat::convertTo(). Here is a bit about depths and types of matrices.
