Disclaimer: I have never used OpenCV or OpenVINO, or in fact anything even close to ML, before. However, I've been cramming neural-network material online because I have to work with Intel's OpenVINO on an edge device.
Here's what the official documentation says about using OpenCV with OpenVINO (i.e., using OpenVINO's inference engine with OpenCV):
-> Optimize the pretrained model with OpenVINO's model optimizer (creating the IR file pair)
-> Use these IR files with OpenCV's dnn.readNet() // this is where the inference engine gets set?
https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_raspbian.html
Digging further, I found a third-party reference where a different approach is taken:
-> Intermediate files (the bin/xml pair) are not created; instead, the Caffe model file is used directly.
-> The inference engine is selected explicitly with the following line:
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
https://www.learnopencv.com/using-openvino-with-opencv/
Now I know that to utilize OpenCV we have to use its inference engine with pretrained models. I want to know which of the two approaches is the correct (or preferred) one, and whether I'm missing out on something.
You can get started using OpenVino from: https://docs.openvinotoolkit.org/latest/_docs_install_guides_installing_openvino_windows.html
You would require a set of prerequisites to run your sample. OpenCV is your computer vision package, which can be used for image processing.
OpenVino inference requires you to convert any of your trained models (.caffemodel, .pb, etc.) to Intermediate Representation (.xml, .bin) files; a rough conversion sketch is shown below.
For a better understanding and sample demos on OpenVino, watch the videos/subscribe to the OpenVino Youtube channel: https://www.youtube.com/channel/UCkN8KINLvP1rMkL4trkNgTg
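For illustration, here is a rough sketch of that conversion step, run from Python (mo.py's location and these flags follow the 2020.x model optimizer layout; treat the exact paths as assumptions and check your install):

# Hypothetical invocation of the model optimizer via subprocess;
# mo.py lives under <openvino>/deployment_tools/model_optimizer in 2020.x installs
import subprocess
subprocess.run([
    "python", "mo.py",
    "--input_model", "model.pb",  # your trained model (.pb, .caffemodel, .onnx, ...)
    "--data_type", "FP16",        # FP16 if you target MYRIAD/NCS devices
    "--output_dir", "ir_files",   # receives the .xml/.bin IR pair
], check=True)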
If the topology that you are using is supported by OpenVino, the best way is to use the OpenCV build that comes with OpenVino. For that you need to:
1. Initialize the OpenVino environment by running setupvars.bat in your OpenVino path (C:\Program Files (x86)\IntelSWTools\openvino\bin)
2. Generate the IR files (xml & bin) for your model using the model optimizer.
3. Run it using the inference engine samples in the path /inference_engine_samples_build/
If the topology is not supported, then you can go for the other procedure that you mentioned.
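Either way, once you have the IR pair, here is a minimal sketch of loading it through OpenCV's dnn module (the file names, input size, and the MYRIAD target are illustrative assumptions):

import cv2

# Load the IR pair produced by the model optimizer (hypothetical file names)
net = cv2.dnn.readNet("model.xml", "model.bin")

# Route inference through the OpenVINO inference engine;
# the MYRIAD target is only needed for an NCS/NCS2 stick
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)

img = cv2.imread("input.jpg")  # hypothetical input image
blob = cv2.dnn.blobFromImage(img, size=(224, 224))  # size depends on your model
net.setInput(blob)
out = net.forward()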
The most common issues I ran into:
setupvars.bat must be run within the same terminal; alternatively, set the variables from Python with os.environ["varname"] = varvalue (see the sketch after this list)
OpenCV needs to be built with support for the inference engines (i.e. DLDT). There are pre-built binaries here: https://github.com/opencv/opencv/wiki/Intel%27s-Deep-Learning-Inference-Engine-backend
Target inference engine: net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
Target NCS2: net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD)
The OpenCV pre-built binary located in the OpenVino directory already has IE support and is also an option.
Note that the Neural Compute Stick 2, AKA NCS2 (OpenVino IE/VPU/MYRIAD), requires FP16 model formats (float16). Also try to keep your image in this format to avoid conversion penalties. You can input images in any of these formats, though: FP32, FP16, U8
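A minimal sketch of the os.environ workaround mentioned above (the path is illustrative; mirror whatever your own setupvars.bat actually exports):

# Hypothetical: make the inference engine libraries visible before importing cv2
import os
os.environ["PATH"] = (r"C:\Program Files (x86)\IntelSWTools\openvino\inference_engine\bin"
                      + os.pathsep + os.environ["PATH"])
import cv2  # must be an IE-enabled build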
I found this guide helpful: https://learnopencv.com/using-openvino-with-opencv/
Here's an example targeting the NCS2 from https://medium.com/sclable/intel-openvino-with-opencv-f5ad03363a38:
# Load the model.
net = cv2.dnn.readNet(ARCH_FPATH, MODEL_FPATH)
# Specify target device.
net.setPreferableBackend(cv2.dnn.DNN_BACKEND_INFERENCE_ENGINE)
net.setPreferableTarget(cv2.dnn.DNN_TARGET_MYRIAD) # NCS 2
# Read an image.
print("Processing input image...")
img = cv2.imread(IMG_FPATH)
if img is None:
    raise Exception(f'Image not found here: {IMG_FPATH}')
# Prepare input blob and perform inference
blob = cv2.dnn.blobFromImage(img, size=(672, 384), ddepth=cv2.CV_8U)
net.setInput(blob)
out = net.forward()
# Draw detected faces (each output row is [image_id, label, conf, xmin, ymin, xmax, ymax])
for detect in out.reshape(-1, 7):
    conf = float(detect[2])
    xmin = int(detect[3] * img.shape[1])
    ymin = int(detect[4] * img.shape[0])
    xmax = int(detect[5] * img.shape[1])
    ymax = int(detect[6] * img.shape[0])
    if conf > CONF_THRESH:
        cv2.rectangle(img, (xmin, ymin), (xmax, ymax), color=(0, 255, 0))
There are more samples here (jupyter notebook/python): https://github.com/sclable/openvino_opencv
Related
My task is to perform inference for face detection using Intel Movidius and Raspberry Pi. The error is that the model only returns "Scores" -> (1, 3000, 2) and not "Boxes".
Steps:
On my local machine, I trained several models (mb1-ssd, mb1-ssd-lite, vgg16-ssd) from the repository https://github.com/qfgaohao/pytorch-ssd and converted them to ONNX. Then, using the OpenVINO model optimizer from openvinotoolkit 2020.1, I obtained the '.bin' and '.xml' files for each model.
Then, using the obtained files, I performed the inference on the Raspberry Pi and hit the mentioned error.
Note: The inference works using pretrained face detection models from the model zoo. The only difference I found when comparing their .xml files with my .xml files is that the last layer, "DetectionOutput", is missing. However, when I visualize the .xml file using Netron, the conversion seems to be correct.
Link to repo: https://github.com/cocacola0/bsc_thesis
OpenVINO™ 2020.3 release is the last OpenVINO™ version that supports Intel® Movidius™ Neural Compute Stick powered by the Intel® Movidius™ Myriad™ 2.
Use ssd_mobilenet_v2_coco and ssdlite_mobilenet_v2, alternative models that are available in the Open Model Zoo. Both models work well with your code.
I have created a new tflite model based on MobilenetV2. It works well without quantization, using the CPU on iOS. I should say that the TensorFlow team did a great job, many thanks.
Unfortunately there is a problem with latency. I use an iPhone 5s to test my model, and I get the following results on the CPU:
500ms for MobilenetV2 with 224*224 input image.
250-300ms for MobilenetV2 with 160*160 input image.
I used the following pod 'TensorFlowLite', '~> 1.13.1'
It's not enough, so I have read the TF documentation related to optimization (post-training quantization). I suppose I need to use Float16 or UInt8 quantization and the GPU Delegate (see https://www.tensorflow.org/lite/performance/post_training_quantization).
I used Tensorflow v2.1.0 to train and quantize my models.
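Roughly, the float16 conversion path I mean looks like this (a minimal sketch with the TF 2.x converter API; the SavedModel path is a placeholder):

import tensorflow as tf

# Hypothetical SavedModel directory for the trained MobilenetV2
converter = tf.lite.TFLiteConverter.from_saved_model("saved_model_dir")
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.target_spec.supported_types = [tf.float16]  # store weights as float16
tflite_model = converter.convert()

with open("model_fp16.tflite", "wb") as f:
    f.write(tflite_model)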
Float16 quantization of weights (I used MobilenetV2 model after Float16 quantization)
https://github.com/tensorflow/examples/tree/master/lite/examples/image_segmentation/ios
pod 'TensorFlowLiteSwift', '0.0.1-nightly'
No errors, but the model doesn't work
pod 'TensorFlowLiteSwift', '2.1.0'
2020-05-01 21:36:13.578369+0300 TFL Segmentation[6367:330410] Initialized TensorFlow Lite runtime.
2020-05-01 21:36:20.877393+0300 TFL Segmentation[6367:330397] Execution of the command buffer was aborted due to an error during execution. Caused GPU Hang Error (IOAF code 3)
Full integer quantization of weights and activations
pod 'TensorFlowLiteGpuExperimental'
Code sample: https://github.com/makeml-app/MakeML-Nails/tree/master/Segmentation%20Nails
I used a MobilenetV2 model after uint8 quantization.
// Configure the GPU delegate (C API)
GpuDelegateOptions options;
options.allow_precision_loss = true;
options.wait_type = GpuDelegateOptions::WaitType::kActive;
//delegate = NewGpuDelegate(nullptr);
delegate = NewGpuDelegate(&options);
if (interpreter->ModifyGraphWithDelegate(delegate) != kTfLiteOk) {
    // the delegate could not be applied; fall back to CPU or report the error
}
Segmentation Live[6411:331887] [DYMTLInitPlatform] platform initialization successful
Loaded model 1
resolved reporter
Didn't find op for builtin opcode 'PAD' version '2'
Is it possible to use a MobilenetV2 quantized model on iOS somehow? Hopefully I made some mistake :) and it's possible.
Best regards,
Dmitriy
This is a link to GITHUB issue with answers: https://github.com/tensorflow/tensorflow/issues/39101
Sorry for the outdated documentation - the GPU delegate should be included in TensorFlowLiteSwift 2.1.0. However, it looks like you're using the C API, so depending on TensorFlowLiteC would be sufficient.
MobileNetV2 does work with the TFLite runtime on iOS, and if I recall correctly it doesn't have a PAD op. Can you attach your model file? With the information provided it's a bit hard to see what's causing the error. As a sanity check, you can get quant/non-quant versions of MobileNetV2 from here: https://www.tensorflow.org/lite/guide/hosted_models
For the int8 quantized model - AFAIK the GPU delegate only works for FP32 and (possibly) FP16 inputs.
I trained a CNN model using Torch (Lua) and then loaded it in OpenCV Java. The model was structured to take 112*112 as its input image size. However, I accidentally fed 128*128 to the model.
I expected an error, but the model just ran smoothly and produced some results. Why is that? Does OpenCV just ignore the surplus parts of the input?
Below is a part of my code:
Mat bgrImage = bgrImages.get(i);
// blobFromImage with no size argument keeps the image's own dimensions,
// so the 128*128 image is passed through unresized
Mat inputBlob = Dnn.blobFromImage(bgrImage);
objectNet.setInput(inputBlob);
Mat fwdResultMat = objectNet.forward();
I need to improve image quality, from low quality to HD quality, and I am using OpenCV libraries. I experimented a lot with GaussianBlur(), Laplacian(), transformation functions, filter functions, etc., but all I could achieve was converting the image to HD resolution while keeping the same quality. Is it possible to do this? Do I need to implement my own algorithm, or is there a known way to do it? I will really appreciate any kind of help. Thanks in advance.
I used this link for my reference. It has other interesting filters that you can play with.
If you are using C++:
detailEnhance(Mat src, Mat dst, float sigma_s=10, float sigma_r=0.15f)
If you are using python:
dst = cv2.detailEnhance(src, sigma_s=10, sigma_r=0.15)
The parameter sigma_s determines how big the neighbourhood of pixels must be to perform the filtering.
The parameter sigma_r determines how the different colours within the neighbourhood will be averaged with each other. Its range is 0 - 1. A smaller value means similar colors will be averaged out while different colors remain as they are.
Since you are looking for sharpness in the image, I would suggest you keep the filtering neighbourhood (sigma_s) as small as possible.
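A minimal end-to-end sketch in Python (the file names are placeholders):

import cv2

src = cv2.imread("input.jpg")  # hypothetical input path
# A small sigma_s keeps the filtering neighbourhood tight, preserving sharpness
dst = cv2.detailEnhance(src, sigma_s=10, sigma_r=0.15)
cv2.imwrite("enhanced.jpg", dst)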
Here are the results I obtained for a sample image (result images omitted): the original image, the sharpened image for a lower sigma_r value, and the sharpened image for a higher sigma_r value.
Check the above mentioned link for more information.
How about applying Super Resolution in OpenCV? A reference article with more details can be found here: https://learnopencv.com/super-resolution-in-opencv/
So basically you will need to have the Python dependency opencv-contrib-python installed, together with a working version of opencv-python.
There are different techniques for the Super Resolution in OpenCV you can choose from, including EDSR, ESPCN, FSRCNN, and LapSRN. Code examples in both Python and C++ have been included in the tutorial article as well for easy reference.
A correction is needed for the Python version:
dst = cv2.detailEnhance(src, sigma_s=10, sigma_r=0.15)
Passing a kernel argument will give an error.
+1 to kris stern's answer.
If you are looking for a practical implementation of super resolution using a pretrained model in OpenCV, have a look at the notebook below, along with a video describing the details.
https://github.com/pankajr141/experiments/blob/master/Reasoning/ComputerVision/super_resolution_enhancing_image_quality_using_pretrained_models.ipynb
https://www.youtube.com/watch?v=JrWIYWO4bac&list=UUplf_LWNn0a9ubnKCZ-95YQ&index=4
Below is sample code using OpenCV (filemodel_filepath, modelname, scale, and img_small are placeholders for your model file, algorithm name, upscaling factor, and input image):
# Set up the super-resolution model
model_pretrained = cv2.dnn_superres.DnnSuperResImpl_create()
# Load the pretrained weights, then select the algorithm (e.g. "edsr") and its scale
model_pretrained.readModel(filemodel_filepath)
model_pretrained.setModel(modelname, scale)
# Prediction, i.e. upscaling
img_upscaled = model_pretrained.upsample(img_small)
I haven't found any method to train new latent SVM detector models using OpenCV. I'm currently using the existing models given in the xml files, but I would like to train my own.
Is there any method for doing so?
Thank you,
Gil.
As of now, only DPM detection is implemented in OpenCV, not training.
If you want to train your own models, the most reliable approach is to use Felzenszwalb's and Girshick's MATLAB code (most of the heavy lifting is implemented in C): http://www.cs.berkeley.edu/~rbg/latent/ and http://www.rossgirshick.info/latent/ . It is reliable and works reasonably fast.
If you want to do it in C only, there is an implementation here (http://libccv.org/doc/doc-dpm/) that I haven't tried myself.
I think there is a function in the Octave version of the author's code (Octave Version of DPM). It is in step #5:
mat2opencvxml('./INRIA/inriaperson_final.mat', 'inriaperson_cascade_cv.xml');
I will try it and let you know about the result.
EDIT
I tried to convert the .mat file from the Octave version I mentioned before to an .xml file and compared the result with the built-in OpenCV .xml model; the structure of the two xmls was different (tags, #components, ...). It seems that this version of the Octave DPM generates xml files for a later OpenCV version (I am using 2.4).
VOC-release3.1 is the one that matches opencv 2.4.14. I tried to convert an already-trained model from this version using the mat2xml function available in OpenCV, and the resulting xml file is successfully loaded and works with OpenCV. Here are some helpful links:
mat2xml code
VOC-release-3.1
How To Train DPM on a New Object