Using transfer learning, I trained an SSD MobileNetV2 model (ssd_mobilenet_v2_coco.config) in TensorFlow (tensorflow-gpu==1.15.0). After freezing the graph (.pb) with the TensorFlow Object Detection API script (export_inference_graph.py), I created a text graph (.pbtxt) using the Python script provided in the OpenCV wiki (tf_text_graph_ssd.py).
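For reference, the two conversion steps looked roughly like this (paths and the checkpoint number are illustrative, not my exact values):
python export_inference_graph.py --input_type image_tensor --pipeline_config_path ssd_mobilenet_v2_coco.config --trained_checkpoint_prefix model.ckpt-XXXX --output_directory exported/
python tf_text_graph_ssd.py --input exported/frozen_inference_graph.pb --config ssd_mobilenet_v2_coco.config --output frozen_inference_graph.pbtxt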
I used the Python code snippet from the wiki to test inference, but I am getting the following error:
cv2.error: OpenCV(4.2.0) C:\projects\opencv-python\opencv\modules\dnn\src\dnn.cpp:562: error: (-2:Unspecified error) Can't create layer "FeatureExtractor/MobilenetV2/expanded_conv_2/add" of type "AddV2" in function 'cv::dnn::dnn4_v20191202::LayerData::getLayerInstance'
I am using Windows 10, Python 3.6.8, and OpenCV 4.2.0.32. I have tried downgrading OpenCV, but earlier versions give different errors.
However, on Ubuntu 18.04.4 with OpenCV installed from source, I do not get any errors. Does anybody know if this is an incompatible layer in the binary wheels of OpenCV for Windows? Should I wait until the next release?
Related
I'm trying to use OpenCV with YOLOv4 / YOLOv4-tiny weights and .cfg files to make object detection predictions. My code won't run; it keeps hitting an error at:
import cv2

modelConfiguration = "yolov4.cfg"
modelWeights = "yolov4.weights"

# Load YOLO
net = cv2.dnn.readNetFromDarknet(modelConfiguration, modelWeights)
Do I need to have Darknet installed on my machine for this to work? I assumed I could use the exported weights and .cfg with OpenCV alone.
I'm having trouble installing Darknet on my machine because I don't have admin rights. Is there a way around this? Could I use an ONNX file instead?
With the OpenCV dnn module there is no need to install Darknet, TensorFlow, or any other framework; the exported .cfg and .weights files are all you need.
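For example, here is a minimal inference sketch using only OpenCV (the image name is just a placeholder; also note that, as far as I know, YOLOv4's mish activation is only supported by OpenCV's Darknet importer from version 4.4.0 onward):
import cv2

# Load the network from the exported Darknet files
net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")

# Prepare the input blob (YOLO expects RGB input scaled to [0, 1])
img = cv2.imread("example.jpg")
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# Forward pass through all YOLO output layers
outs = net.forward(net.getUnconnectedOutLayersNames())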
I'm trying out this article:
https://towardsdatascience.com/deep-learning-based-super-resolution-with-opencv-4fd736678066
This is the code copied out of the article:
import cv2
from cv2 import dnn_superres
sr = dnn_superres.DnnSuperResImpl_create()
image = cv2.imread('./input.png')
path = "EDSR_x3.pb"
sr.readModel(path)
sr.setModel("edsr", 3)
result = sr.upsample(image)
cv2.imwrite("./upscaled.png", result)
I also tried the OpenCV super-resolution tutorial:
https://docs.opencv.org/master/d5/d29/tutorial_dnn_superres_upscale_image_single.html
import cv2
from cv2 import dnn_superres
sr = dnn_superres.DnnSuperResImpl_create()
image = cv2.imread('./image.png')
path = "EDSR_x4.pb"
sr.readModel(path)
sr.setModel("edsr", 4)
result = sr.upsample(image)
cv2.imwrite("./upscaled.png", result)
My environment is Anaconda3 with OpenCV 4.3.0.
I either get the error from the title, or I get "Killed" when I run the OpenCV example.
My files are all in the same directory as the sample code; I only changed the image file names.
I did try to compile opencv and opencv_contrib with CMake, but I didn't know how to make Python use the opencv and opencv_contrib built from source.
I followed this documentation to install OpenCV from source:
https://docs.opencv.org/3.4/d2/de6/tutorial_py_setup_in_ubuntu.html
I opted to use the Anaconda packaging of OpenCV 4.3.0 because I ran into too many dependency and misinstalled-package problems.
A friend from a meetup managed to run the code from the article exactly as depicted, while I tried to follow exactly what he did, using an Anaconda environment. Could my problem stem from my virtual environment, the OpenCV package version, or the code itself? I had another colleague run my code from my GitHub branch, and he hit exactly the same problems. How should I approach these bugs and get the super-resolution examples working?
The error 'Model not specified' comes from the fact that the Net is empty. You have to actually download the model and then provide its path to the sr.readModel() function.
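For example, a minimal sketch (EDSR_x3.pb is the pre-trained model you download separately; the path check is only there to rule out a wrong location):
import os
import cv2
from cv2 import dnn_superres

path = "EDSR_x3.pb"
# If the file is missing, readModel has nothing to load and upsampling fails
assert os.path.isfile(path), "download EDSR_x3.pb and place it next to this script"

sr = dnn_superres.DnnSuperResImpl_create()
sr.readModel(path)
sr.setModel("edsr", 3)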
If you did that and it still does not work, you could try these two things:
Try a smaller image (in .png format).
Build OpenCV from source. Do not forget the contrib modules (this is where the dnn_superres module resides). You said you had a problem linking Python. I would suggest using this tutorial. After following it, run the following command (after you have already done sudo make install) to link the Python libraries:
sudo ldconfig
I am trying to run some old MATLAB code with Octave. Unfortunately, this code contains a geotiffread function, and I think I should replace it with rasterread (from the mapping package).
However, when I try to install the mapping package I get this warning:
octave:7> pkg install mapping-1.4.0.tar.gz
configure: WARNING: GDAL library not found. Reading of raster files will be disabled.
For information about changes from previous versions of the mapping package, run 'news mapping'.
I tried to run Octave (version 5.2.0) within:
a Debian Buster distribution (snap and flatpak packages)
a Docker container (macOS 10.15 host, installed from the mtmiller/octave image)
the octave-online service, running this code:
pkg load mapping;
[bands, info] = rasterread ('mexutm250.tiff');
With this output:
octave:3> source("my_script.m")
error: gdalread: reading of raster file with GDAL was disabled during installation
error: called from
rasterread at line 56 column 26
my_script at line 2 column 15
No attempt was successful.
EDIT 2: I know that my Octave installations are without GDAL support. I would like to use Octave with the full mapping package, with GDAL support, without recompiling it. Is there a way to do this (e.g., updating a library path within the Docker installation to add the libgdal library)?
If there is no way to add GDAL support without recompiling Octave, is there a guide for doing it with minimal effort?
EDIT 3: I have already installed the GDAL dependencies:
$ sudo aptitude search gdal |grep ^i
[sudo] password for virtuser:
i   gdal-bin     - Geospatial Data Abstraction Library - utility programs
i A gdal-data    - Geospatial Data Abstraction Library - data files
i   libgdal-dev  - Geospatial Data Abstraction Library - development files
i   libgdal20    - Geospatial Data Abstraction Library
Thank you.
I got Octave with GDAL integration when I installed the octave package from the Debian repository. I needed Octave 5.2, so I switched to Ubuntu 20.04.
As suggested in one of the comments, checking
>> news mapping
(also at https://octave.sourceforge.io/mapping/NEWS.html)
and looking at mapping 1.2.1, where rasterread was introduced, it states:
** New features
Reading GIS raster data: A first go is provided using
functions rasterread.m and rasterinfo.m. Both invoke binary
function gdalread() of which an initial version was provided
by Shashank Khare. rasterread.m and rasterinfo.m can read
and return info on any raster data type that the underlying
GDAL library can read. As such, separate functions for e.g.,
GeoTIFF and ArcGrid etc. are not required.
To make use of these functions the GDAL library must be
present on your system => GDAL is a suggested dependency.
You should be able to install the GDAL library on Debian using your preferred installation method.
I'm unsure whether you'll need to uninstall/reinstall the mapping package afterward, but if an unload/reload doesn't get rid of the message, try that and see if mapping is able to see the library.
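Roughly, the steps might look like this (a sketch for a Debian-style system; exact package names may differ, and the reinstall matters because GDAL is detected when the package's configure script runs):
$ sudo apt-get install libgdal-dev
octave:1> pkg uninstall mapping
octave:2> pkg install mapping-1.4.0.tar.gz  # configure should now find GDAL
octave:3> pkg load mapping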
Could you please point me to the documentation sample showcasing how to put together PyTorch dependencies for training on AzureML?
A few related questions about running PyTorch training workloads on AzureML:
How can I set the CUDA version to 10.1?
Could you please point me to a sample demonstrating how to use the "official" PyTorch Docker image https://hub.docker.com/r/pytorch/pytorch (which should include all the CUDA pieces: https://github.com/pytorch/pytorch/blob/master/docker/pytorch/Dockerfile)?
I've found distributed-pytorch-with-horovod.yml in the docs, but it does not mention any PyTorch dependencies -- am I looking in the right place?
Install PyTorch with CUDA 10.1 using the following command on Windows:
pip3 install torch===1.3.1 torchvision===0.4.2 -f https://download.pytorch.org/whl/torch_stable.html
From the .yml file:
https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/ml-frameworks/pytorch/deployment/train-hyperparameter-tune-deploy-with-pytorch/train-hyperparameter-tune-deploy-with-pytorch.yml
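For the PyTorch dependencies themselves, a minimal conda environment sketch could look like this (names and versions are illustrative; cudatoolkit=10.1 is how conda pins the CUDA version for PyTorch):
name: pytorch-env
channels:
  - pytorch
  - defaults
dependencies:
  - python=3.6
  - pytorch=1.3
  - torchvision
  - cudatoolkit=10.1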
Please follow the documents below for PyTorch on Azure:
https://notebooks.azure.com/pytorch
https://azure.microsoft.com/en-us/blog/pytorch-on-azure-full-support-for-pytorch-1-2/
I am trying to run the provided ARIMA model example (from the Spark spark-ts library) with the ARIMA test data, using the Java API with the 0.4.0 jar. I am calling ARIMA.autoFit(ts, 1, 1, 1) to fit the model.
However, I get the two warnings below, after which execution halts without any further progress or errors:
WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeSystemBLAS
WARN BLAS: Failed to load implementation from: com.github.fommil.netlib.NativeRefBLAS
I tried Google and installed libgfortran3, but to no avail.
Any suggestions?
Thanks
I fixed the issue by building the jar from scratch with Maven rather than using the pre-built jar. I also built it on the Ubuntu machine where Spark runs.
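For reference, the build itself was straightforward (a sketch, assuming the library's sryza/spark-timeseries GitHub repository and a working Maven install):
git clone https://github.com/sryza/spark-timeseries.git
cd spark-timeseries
mvn clean package -DskipTests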