I trained a CNN model using Torch (Lua) and then loaded it in OpenCV Java. The model was structured to take 112x112 input images, but I accidentally fed 128x128 images to it.
I expected an error, but the model just ran smoothly and produced some results. Why is that? Does OpenCV simply ignore the surplus part of the input?
Below is a part of my code:
Mat bgrImage = bgrImages.get(i);
Mat inputBlob = Dnn.blobFromImage(bgrImage);
objectNet.setInput(inputBlob);
Mat fwdResultMat = objectNet.forward();
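For reference, blobFromImage also accepts an explicit target size, so the blob can be forced to the resolution the network was trained on instead of whatever size the source image happens to be. A minimal sketch in Python (the Java Dnn.blobFromImage overload takes the same parameters; the file paths here are placeholders):
import cv2

net = cv2.dnn.readNetFromTorch("model.t7")   # placeholder path to the Torch model
img = cv2.imread("input.jpg")                # e.g. a 128x128 image

# Resize to the trained input size (112x112) while building the blob.
blob = cv2.dnn.blobFromImage(img, scalefactor=1.0, size=(112, 112))
net.setInput(blob)
out = net.forward()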
Disclaimer: I have never used openCV or openVINO or, for that matter, anything even close to ML before. However I've been slamming my head studying neural networks (reading material online) because I have to work with intel's openVINO on an edge device.
Here's what the official documentation says about using openCV with openVINO (i.e. using openVINO's inference engine with openCV):
-> Optimize the pretrained model with openVINO's model optimizer (creating the IR file pair)
-> use these IR files with openCV's cv2.dnn.readNet() // this is where the inference engine gets set?
https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html
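That documented flow, as I understand it in code form; just a sketch with placeholder file names, assuming an OpenCV build that has inference-engine support:
import cv2

# Load the IR pair produced by the model optimizer.
net = cv2.dnn.readNet("model.xml", "model.bin")

# Route execution through openVINO's inference engine.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)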
Tried digging more and found a third-party reference. Here a different approach is taken:
-> Intermediate files (bin/xml) are not created; instead the caffe model file is used directly
-> the inference engine is selected explicitly with the following line:
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
https://www.learnopencv.com/using-openvino-with-opencv/
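And the third-party approach as code; again only a sketch with placeholder file names:
import cv2

# Load the original caffe files directly -- no IR conversion step.
net = cv2.dnn.readNetFromCaffe("deploy.prototxt", "model.caffemodel")

# Explicitly select the inference-engine backend, as in the article.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)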
Now I know that to utilize openCV we have to use its inference engine with pretrained models. I want to know which of the two approaches is the correct (or preferred) one, and whether I'm missing out on something.
You can get started using OpenVino from: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_windows.html
You would require a set of prerequisites to run your sample. OpenCV is your computer vision package, which can be used for image processing.
Openvino inference requires you to convert any of your trained models (.caffemodel, .pb, etc.) to Intermediate Representation (.xml, .bin) files.
For a better understanding and sample demos on OpenVino, watch the videos/subscribe to the OpenVino Youtube channel: https://www.youtube.com/channel/UCkN8KINLvP1rMkL4trkNgTg
If the topology that you are using is supported by OpenVino, the best way is to use the OpenCV that comes with OpenVino. For that you need to:
1. Initialize the openvino environment by running setupvars.bat in your openvino path (C:\Program Files (x86)\IntelSWTools\openvino\bin)
2. Generate the IR files (xml & bin) for your model using the model optimizer.
3. Run your model using the inference engine samples in /inference_engine_samples_build/
If the topology is not supported, then you can go for the other procedure that you mentioned.
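In case it helps, a minimal sketch of step 3 using the inference engine's Python API directly instead of the prebuilt samples (file names, the input layer name, and the input shape are all placeholders, and attribute names can vary a little between OpenVino releases):
import numpy as np
from openvino.inference_engine import IECore

ie = IECore()
net = ie.read_network(model="model.xml", weights="model.bin")
exec_net = ie.load_network(network=net, device_name="CPU")

# The input layer name and shape depend on your model.
input_blob = np.zeros((1, 3, 224, 224), dtype=np.float32)
result = exec_net.infer({"input": input_blob})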
The most common issues I ran into:
setupvars.bat must be run within the same terminal, or use os.environ["varname"] = varvalue
OpenCV needs to be built with support for the inference engine (i.e. DLDT). There are pre-built binaries here: https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend
Target inference engine: net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
Target NCS2: net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
The OpenCV pre-built binary located in the OpenVino directory already has IE support and is also an option.
Note that the Neural Compute Stick 2, AKA NCS2 (OpenVino IE/VPU/MYRIAD), requires FP16 model formats (float16). Also try to keep your image in this format to avoid conversion penalties. You can input images in any of these formats, though: FP32, FP16, U8.
I found this guide helpful: https://learnopencv.com/using-openvino-with-opencv/
Here's an example targeting the NCS2, from https://medium.com/sclable/intel-openvino-with-opencv-f5ad03363a38:
# Load the model.
net = cv2.dnn.readNet(ARCH_FPATH, MODEL_FPATH)
# Specify target device.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD) # NCS 2
# Read an image.
print("Processing input image...")
img = cv2.imread(IMG_FPATH)
if img is None:
    raise Exception(f'Image not found here: {IMG_FPATH}')
# Prepare input blob and perform inference
blob = cv2.dnn.blobFromImage(img, size=(672, 384), ddepth=cv2.CV_8U)
net.setInput(blob)
out = net.forward()
# Draw detected faces
for detect in out.reshape(-1, 7):
    conf = float(detect[2])
    xmin = int(detect[3] * img.shape[1])
    ymin = int(detect[4] * img.shape[0])
    xmax = int(detect[5] * img.shape[1])
    ymax = int(detect[6] * img.shape[0])
    if conf > CONF_THRESH:
        cv2.rectangle(img, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
There are more samples here (jupyter notebook/python): https://github.com/sclable/openvino_opencv
I have an image with 8 channels. I have a conventional algorithm where weights are applied to each of these channels to get an output of '0' or '1'. This works fine with several samples and complex scenarios. I would like to implement the same thing in machine learning using a CNN.
I am new to ML and started looking at tutorials, which seem to deal exclusively with image processing problems: handwriting recognition, feature extraction, etc.
http://cv-tricks.com/tensorflow-tutorial/training-convolutional-neural-network-for-image-classification/
https://leonardoaraujosantos.gitbooks.io/artificial-inteligence/content/neural_networks.html
I have set up Keras with Theano as the backend. Basic Keras samples are working without problems.
What steps do I need to follow in order to achieve the same result using a CNN? I do not comprehend the use of filters, kernels, and strides in my use case. How do we provide training data to Keras if the pixel channel values and output are in the below form?
Pixel#1 f(C1,C2...C8)=1
Pixel#2 f(C1,C2...C8)=1
Pixel#3 f(C1,C2...C8)=0
...
Pixel#N f(C1,C2...C8)=1
I think you should treat this the same way you would use a CNN to do semantic segmentation. For an example, look at
https://people.eecs.berkeley.edu/~jonlong/long_shelhamer_fcn.pdf
You can use the same architecture as they are using, but for the first layer, instead of using filters for 3 channels, use filters for 8 channels; see the sketch after the links below.
For the loss function you can use the same loss function, or something more specific to binary classification.
There are several implementations for Keras, but with a TensorFlow backend:
https://github.com/JihongJu/keras-fcn
https://github.com/aurora95/Keras-FCN
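Just to illustrate the 8-channel first layer (a sketch only; the filter count is made up, and the rest of the architecture would follow the paper/repos above):
from keras.models import Sequential
from keras.layers import Conv2D

model = Sequential()
# The only real change vs. an RGB network: the input has 8 channels, not 3.
model.add(Conv2D(64, (3, 3), activation='relu', padding='same',
                 input_shape=(None, None, 8)))
# ... rest of the FCN layers ...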
Since the input is in the form of channel values, and in sequence, I would suggest you use Convolution1D. Here, you take each pixel's channel values as the input and you predict for each pixel. Try something like this (a runnable version of the idea; the filter counts are only an example):
from keras.models import Sequential
from keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

model = Sequential()
model.add(Conv1D(32, 3, strides=1, padding='valid', activation='relu',
                 input_shape=(8, 1)))   # 8 channel values per pixel, 1 feature each
model.add(MaxPooling1D(pool_size=2))
# ... add as many layers as you want ...
model.add(Flatten())
model.add(Dense(1, activation='sigmoid'))
Use binary_crossentropy as the loss function.
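A sketch of how the pixel data from the question could then be fed in, using random stand-in data (N pixels, 8 channel values each, one 0/1 label per pixel):
import numpy as np

N = 1000
X = np.random.rand(N, 8, 1)           # Conv1D expects (samples, steps, features)
y = np.random.randint(0, 2, size=N)   # the 0/1 output per pixel

model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
model.fit(X, y, epochs=10, batch_size=32)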
I'm trying to create an example using the Keras built into the latest version of TensorFlow from Google. This example should be able to classify a classic image of an elephant. The code looks like this:
# Import a few libraries for use later
from PIL import Image as IMG
from tensorflow.contrib.keras.python.keras.preprocessing import image
from tensorflow.contrib.keras.python.keras.applications.inception_v3 import InceptionV3
from tensorflow.contrib.keras.python.keras.applications.inception_v3 import preprocess_input, decode_predictions
# Get a copy of the Inception model
print('Loading Inception V3...\n')
model = InceptionV3(weights='imagenet', include_top=True)
print ('Inception V3 loaded\n')
# Read the elephant JPG
elephant_img = IMG.open('elephant.jpg')
# Convert the elephant to an array
elephant = image.img_to_array(elephant_img)
elephant = preprocess_input(elephant)
elephant_preds = model.predict(elephant)
print ('Predictions: ', decode_predictions(elephant_preds))
Unfortunately I'm getting an error when trying to evaluate the model with model.predict:
ValueError: Error when checking : expected input_1 to have 4 dimensions, but got array with shape (299, 299, 3)
This code is taken from, and based on, the excellent example coremltools-keras-inception, and will be expanded further once this is figured out.
The reason why this error occurred is that the model always expects a batch of examples, not a single example. This diverges from the common understanding of models as mathematical functions of their inputs. The reasons why a model expects batches are:
Models are computationally designed to work faster on batches in order to speed up training.
There are algorithms which take into account the batch nature of the input (e.g. Batch Normalization or GAN training tricks).
So the four dimensions come from the first dimension, which is the sample/batch dimension, and then the next 3 dimensions are the image dims.
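In other words, a tiny numpy illustration of where the fourth dimension comes from:
import numpy as np

single = np.zeros((299, 299, 3))          # one image: 3 dimensions
batch = np.expand_dims(single, axis=0)    # a batch of one image: 4 dimensions
print(single.shape, batch.shape)          # (299, 299, 3) (1, 299, 299, 3)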
Actually I found the answer. Even though the documentation only states the shape of a single input image, the model with the top layer included is still set up to take a batch of images. Thus we need to add this before the prediction line (note that it requires import numpy):
elephant = numpy.expand_dims(elephant, axis=0)
Then the tensor has the right shape and everything works correctly. I am still uncertain why the documentation states that the input vector should be (3x299x299) or (299x299x3) when it clearly wants 4 dimensions.
Be careful!
Now I'm using the fb torch library from github: fb torch resnet
It's my first time using torch and lua, so I'm encountering some problems.
My goal is to save the feature vector of a specific layer (the last avg pooling of resnet) into one file, together with the class of the input image. All input images are from the cifar-10 db.
The file format that I want to get is like below:
image1.txt := class index of image and feature vector of image 1 of cifar-10
image2.txt := class index of image and feature vector of image 2 of cifar-10
// and so on through all images of cifar-10
Now I have seen some sample code in that github repo: extract-features.lua
Because it's my first time with lua, I find it hard to understand this code and to modify it the way I want. And I don't want to save my data into the t7 file format.
How can I access only one specific layer of the network in torch via lua? (last average pooling)
How can I access the values of that layer and the classification result index?
How can I read each of the images from the cifar-10 db file (t7 batch)?
Sorry for so many questions, but I'm finding torch hard to use because of the small amount of community threads and posts about it. Please understand.
How can I access only one specific layer of the network in torch via lua? (last average pooling)
To access a layer you just have to load the model and index it with an integer. If you do print(model) you will be able to see in which position the last average pooling is.
model = torch.load(path_to_model):cuda()
avg_pooling_layer = model:get(position_of_the_avg_pooling_layer)
How can I access the values of that layer and the classification result index?
I do not quite understand what you mean by this. If you want to see the output or the weights of a specific layer (following the code above), you need to get these elements from the layer table. Again, to see which elements are available, use print(avg_pooling_layer).
weights = avg_pooling_layer.weight -- get the weights of the layer
output = avg_pooling_layer.output -- get the output of the layer
How can I read each of the images from the cifar-10 db file (t7 batch)?
To read the images from a t7 file use the torch function torch.load (used before to load the model):
cifar_10 = torch.load("path_to_cifar-10.t7")
Once loaded you should have the training and test sets in subtables or functions. Again, print the table and check which values are the ones you need.
Hope this helps!
I'm trying out the Haarcascade based FaceDetection using the GPU module in OpenCV 2.3.1.
My code compiles, and sometimes it shows the initial frame with one or more ROI rectangles drawn onto the output frame to highlight detected objects.
But after the 2nd or 3rd repeated call of this detector method it just crashes with SIGABRT. Any suggestions on this?
Here's the code:
cv::Mat ProcessorWidget::detectGPU(Mat &img) {
    // The classifier is (re)created on every call, loading the cascade XML
    // selected via a file dialog (just for testing, see below).
    cv::gpu::CascadeClassifier_GPU cascade_gpu(QFileDialog::getOpenFileName(this).toStdString());

    img.copyTo(image_cpu);   // image_cpu is a class member (not shown here)
    gpu::GpuMat image_gpu(image_cpu);
    gpu::GpuMat objbuf;
    int detections_number = cascade_gpu.detectMultiScale(image_gpu, objbuf, 1.2);

    // download only detected number of rectangles
    Mat obj_host;
    objbuf.colRange(0, detections_number).download(obj_host);

    Rect* faces = obj_host.ptr<Rect>();
    for (int i = 0; i < detections_number; ++i)
        cv::rectangle(image_cpu, faces[i], Scalar(255));

    return image_cpu;
}
Another point is that some of the Haarcascade classifiers that come with OpenCV will always crash my application when I use them, while some other classifiers always work on the first frame and then crash a few frames later.
BTW, I initialize the classifier from within this method just for testing purposes. Initializing it just once when constructing the ProcessorWidget didn't help either ...
Could the classifier XMLs be incompatible somehow?
Thanks in Advance!
Directly from the docs:
Only the old haar classifier (trained by the haar training application) and NVIDIA’s nvbin are supported.