OpenCV TBB IPP OpenMP functions

Is there a list of functions/methods of OpenCV that have been optimized with IPP and/or TBB and/or OpenMP?

Disclaimer: I have no experience with OpenCV.
I found no such list on the official opencv.org site. However, the ChangeLog says:
switched all the remaining parallel loops from TBB-only tbb::parallel_for() to universal cv::parallel_for_() with many possible backends (MS Concurrency, Apple's GCD, OpenMP, Intel TBB etc.)
Now we know what to search for, and grep -IRl parallel_for_ applied to the opencv directory gives us the following:
build/include/opencv2/core/core.hpp
sources/apps/traincascade/boost.cpp
sources/modules/calib3d/src/stereobm.cpp
sources/modules/contrib/src/basicretinafilter.cpp
sources/modules/contrib/src/magnoretinafilter.cpp
sources/modules/contrib/src/parvoretinafilter.cpp
sources/modules/contrib/src/retinacolor.cpp
sources/modules/contrib/src/templatebuffer.hpp
sources/modules/core/include/opencv2/core/core.hpp
sources/modules/core/src/matrix.cpp
sources/modules/core/src/parallel.cpp
sources/modules/core/src/stat.cpp
sources/modules/features2d/src/detectors.cpp
sources/modules/gpu/src/calib3d.cpp
sources/modules/highgui/test/test_ffmpeg.cpp
sources/modules/imgproc/src/clahe.cpp
sources/modules/imgproc/src/color.cpp
sources/modules/imgproc/src/distransform.cpp
sources/modules/imgproc/src/generalized_hough.cpp
sources/modules/imgproc/src/histogram.cpp
sources/modules/imgproc/src/imgwarp.cpp
sources/modules/imgproc/src/morph.cpp
sources/modules/imgproc/src/smooth.cpp
sources/modules/imgproc/src/thresh.cpp
sources/modules/ml/src/ann_mlp.cpp
sources/modules/ml/src/gbt.cpp
sources/modules/ml/src/knearest.cpp
sources/modules/ml/src/nbayes.cpp
sources/modules/ml/src/svm.cpp
sources/modules/nonfree/src/surf.cpp
sources/modules/objdetect/src/cascadedetect.cpp
sources/modules/objdetect/src/haar.cpp
sources/modules/objdetect/src/hog.cpp
sources/modules/ocl/src/kmeans.cpp
sources/modules/photo/src/denoising.cpp
sources/modules/stitching/src/matchers.cpp
sources/modules/superres/src/btv_l1.cpp
sources/modules/video/src/bgfg_gaussmix2.cpp
sources/modules/video/src/bgfg_gmg.cpp
sources/modules/video/src/lkpyramid.cpp
sources/modules/video/src/tvl1flow.cpp
Here we see the list of modules and source files that use the parallel loop, which I hope is enough to answer the question for TBB and OpenMP. For more details, open the corresponding file and search for parallel_for_ to find out under which circumstances it is applied.
As for IPP, it seems to be used quite extensively by the core library; egrep -IRl '\bipp' gives the following (a quick way to check what your own build actually enables is sketched after the list):
modules/calib3d/src/calibration.cpp
modules/core/include/opencv2/core/core_c.h
modules/core/include/opencv2/core/internal.hpp
modules/core/src/arithm.cpp
modules/core/src/dxt.cpp
modules/core/src/mathfuncs.cpp
modules/core/src/matmul.cpp
modules/core/src/precomp.hpp
modules/core/src/stat.cpp
modules/core/src/system.cpp
modules/imgproc/src/canny.cpp
modules/imgproc/src/color.cpp
modules/imgproc/src/deriv.cpp
modules/imgproc/src/distransform.cpp
modules/imgproc/src/filter.cpp
modules/imgproc/src/imgwarp.cpp
modules/imgproc/src/morph.cpp
modules/imgproc/src/samplers.cpp
modules/imgproc/src/smooth.cpp
modules/imgproc/src/sumpixels.cpp
modules/legacy/test/test_pyrsegmentation.cpp
modules/objdetect/src/haar.cpp
modules/objdetect/src/hog.cpp
modules/ocl/src/haar.cpp
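Note that these lists only show where the hooks exist in the sources; whether TBB/OpenMP or IPP is actually used depends on how your OpenCV binary was built. As a sanity check, the build information string can be inspected from the bindings; here is a minimal sketch using the Python bindings (the exact labels, such as "Parallel framework" and the IPP entries, vary between OpenCV versions):
import cv2

info = cv2.getBuildInformation()
for line in info.splitlines():
    if "Parallel framework" in line or "IPP" in line:
        print(line.strip())

print("optimized code enabled:", cv2.useOptimized())  # IPP/SSE fast paths
print("worker threads:", cv2.getNumThreads())         # size of the parallel_for_ pool
For benchmarking, cv2.setUseOptimized(False) and cv2.setNumThreads(1) let you compare timings with and without these optimizations.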

Related

Ada + Machine Learning (Python Framework)

I'm trying to write a simple machine learning application in Ada, and also trying to find a good framework to use. My knowledge of one thing is extremely minimal, and of the other is somewhat minimal.
There are several nifty machine learning frameworks out there, and I'd like to leverage one from an Ada program, but I guess I'm just... at a loss. Can I use an existing framework written in Python, for instance, and wrap (or, I guess, bind?) the API calls in Ada? Should I just hand that part off to Python scripting? I'm trying to figure it out.
Case in point: Scikit (sklearn)
https://scikit-learn.org/stable/tutorial/text_analytics/working_with_text_data.html#
This does some neat stuff, and I'd like to be able to leverage this, but with an Ada program. Does anyone have advice from a similar experience?
I am just researching, and so far I have found the scikit-learn tutorial above and this article on binding Ada to Python:
http://www.inspirel.com/articles/Ada_Python_Binding.html
The inspirel solution is based on Python 2.7. If you're using anything from Python 3.5 onwards, a few modifications need to be made. On Linux, to switch to, say, Python 3.7, you'd just change
--for Default_Switches ("Ada") use ("-lpython2.7");
for Default_Switches ("Ada") use ("-lpython3.7");
but on Windows, the libraries aren't dumped into a common lib directory, so GNAT doesn't know where to find them; all the packages are kept separately. A -L switch has to be added to tell the linker where to find the library (alternatively, you can use for lib_dir). In my case, I did a non-admin install of Python, so it looks something like
for Default_Switches ("Ada") use ("-L\Users\StdUser\AppData\Local\Programs\Python\Python37-32\libs", "-lpython37");
Note that on Windows the library is called python37, not python3.7. Use gprbuild instead of gnatmake -p, which has been deprecated. If you do all your modifications correctly,
gprbuild ada_main.gpr
should give you an executable in obj\ada_main.exe if it builds. If a later version of Python is used, some edits need to be made:
python_module.py
#print 'Hello from Python module'
print('Hello from Python module')
#print 'Python adding:', a, '+', b
print('Python adding:', a, '+', b)
ada_main.adb
-- Python.Execute_String("print 'Hello from Python!'");
Python.Execute_String("print('Hello from Python!')");
Some C API routines have been removed or renamed in Python 3, so the linkage has to change:
python.adb
--pragma Import(C, PyInt_AsLong, "PyInt_AsLong");
pragma Import(C, PyInt_AsLong, "PyLong_AsLong");
--pragma Import(C, PyString_FromString, "PyString_FromString");
pragma Import(C, PyString_FromString, "PyUnicode_FromString");
Running the build and executable should give
C:\Users\StdUser\My Documents\ada-python>gprbuild ada_main.gpr
Compile
[Ada] ada_main.adb
Bind
[gprbind] ada_main.bexch
[Ada] ada_main.ali
Link
[link] ada_main.adb
C:\Users\StdUser\My Documents\ada-python>obj\ada_main.exe
executing Python directly from Ada:
Hello from Python!
loading external Python module and calling functions from that module:
Hello from Python module!
asking Python to add two integers:
Python adding: 10 + 2
Ada got result from Python: 12
we can try other operations, too:
subtract: 8
multiply: 20
divide : 5
Remember to put the pythonxx.dll (python37.dll here) somewhere on your PATH; otherwise the executable won't be able to find the library when it starts running.
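For reference, here is a Python 3 version of python_module.py that is consistent with the transcript above. The function names add, subtract, multiply and divide are assumptions inferred from the printed output; the original inspirel example ships its own module:
print('Hello from Python module')

def add(a, b):
    print('Python adding:', a, '+', b)
    return a + b

def subtract(a, b):
    return a - b

def multiply(a, b):
    return a * b

def divide(a, b):
    return a // b   # integer division, since Ada reads the result back as an integer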

Pointcloud Visualization in Drake Visualizer in Python

I would like to visualize a point cloud in drake-visualizer using the Python bindings.
I imitated how images are published through LCM from here, and checked out these two issues (14985, 14991). The snippet is as follows:
point_cloud_to_lcm_point_cloud = builder.AddSystem(PointCloudToLcm())
point_cloud_to_lcm_point_cloud.set_name('pointcloud_converter')
builder.Connect(
    station.GetOutputPort('camera0_point_cloud'),
    point_cloud_to_lcm_point_cloud.get_input_port()
)

point_cloud_lcm_publisher = builder.AddSystem(
    LcmPublisherSystem.Make(
        channel="DRAKE_POINT_CLOUD_camera0",
        lcm_type=lcmt_point_cloud,
        lcm=None,
        publish_period=0.2,
        # use_cpp_serializer=True
    )
)
point_cloud_lcm_publisher.set_name('point_cloud_publisher')
builder.Connect(
    point_cloud_to_lcm_point_cloud.get_output_port(),
    point_cloud_lcm_publisher.get_input_port()
)
However, I got the following runtime error:
RuntimeError: DiagramBuilder::Connect: Mismatched value types while connecting output port lcmt_point_cloud of System pointcloud_converter (type drake::lcmt_point_cloud) to input port lcm_message of System point_cloud_publisher (type drake::pydrake::Object)
When I set 'use_cpp_serializer=True', the error becomes
    LcmPublisherSystem.Make(
  File "/opt/drake/lib/python3.8/site-packages/pydrake/systems/_lcm_extra.py", line 71, in _make_lcm_publisher
    serializer = _Serializer_[lcm_type]()
  File "/opt/drake/lib/python3.8/site-packages/pydrake/common/cpp_template.py", line 90, in __getitem__
    return self.get_instantiation(param)[0]
  File "/opt/drake/lib/python3.8/site-packages/pydrake/common/cpp_template.py", line 159, in get_instantiation
    raise RuntimeError("Invalid instantiation: {}".format(
RuntimeError: Invalid instantiation: _Serializer_[lcmt_point_cloud]
I saw the C++ example here, so maybe this issue is specific to the Python bindings.
I also saw this Python example, but thought using PointCloudToLcm might be more convenient.
P.S.
I am aware of the development in recent commits on MeshcatVisualizerCpp and MeshcatPointCloudVisualizerCpp, but I am still on the drake-dev stable build 0.35.0-1 and want to stay on drake-visualizer until the Meshcat C++ support is more mature.
The old pydrake.systems.meshcat_visualizer.MeshcatVisualizer is a bit too slow for my current use case (multiple objects dropping). I can visualize the point cloud with this visualization setting, but it takes too many machine resources.
Only the message types that are specifically bound in lcm_py_bind_cpp_serializers.cc can be used on an LCM message input/output port connection between C++ and Python. For all other LCM message types, the input/output port connection must be from a Python system to a Python system or a C++ System to a C++ System.
The lcmt_image_array is listed there, but not the lcmt_point_cloud.
If you're stuck using Drake's v0.35.0 capabilities, then I don't see any great solutions. Some options:
(1) Write your own PointCloudToLcm system in Python (by re-working the C++ code into Python, possibly with a narrower set of supported features / channels for simplicity); a skeleton is sketched after this list.
(2) Write your own small C++ helper function MakePointCloudPublisherSystem(...) that calls LcmPublisherSystem::Make<lcmt_point_cloud> function in C++, and bind it into Python. Then your Python code can call MakePointCloudPublisherSystem() and successfully connect that to the existing C++ PointCloudToLcm.
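For option (1), a minimal skeleton of such a Python system might look like the following. The class and port names are made up, and the actual message-packing step is left as a comment, because the authoritative field layout lives in Drake's C++ point_cloud_to_lcm.cc:
from drake import lcmt_point_cloud
from pydrake.common.value import AbstractValue
from pydrake.perception import PointCloud
from pydrake.systems.framework import LeafSystem

class PyPointCloudToLcm(LeafSystem):
    """Hypothetical pure-Python stand-in for the C++ PointCloudToLcm."""

    def __init__(self):
        LeafSystem.__init__(self)
        self._cloud_port = self.DeclareAbstractInputPort(
            'point_cloud', AbstractValue.Make(PointCloud()))
        self.DeclareAbstractOutputPort(
            'lcmt_point_cloud',
            lambda: AbstractValue.Make(lcmt_point_cloud()),
            self._calc_message)

    def _calc_message(self, context, output):
        cloud = self._cloud_port.Eval(context)
        msg = lcmt_point_cloud()
        # TODO: pack cloud.xyzs() (and rgbs(), if any) into msg, mirroring
        # the field layout used by Drake's C++ point_cloud_to_lcm.cc.
        output.set_value(msg)
Since the converter is then a Python system, the publisher can keep the default Python serializer (use_cpp_serializer left off), so both ends of the connection carry the Python lcmt_point_cloud type.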

Is it possible to do IgBLAST (analyze immunoglobulin (Ig) sequences) with Python/Biopython?

I'm new to Python/Biopython and tried BLAST using Biopython like this:
from Bio.Blast import NCBIWWW
fasta_string = open("C:\\xxxx\\xxxx\\xxxx\\abc.fasta").read()
result_handle = NCBIWWW.qblast("blastp", "nr", fasta_string)
print(result_handle.read())
I was wondering if it is possible to run IgBLAST using Biopython. I have searched for this, but it seems no one is really doing it.
I haven't tried them, but I have found:
pyigblast
PyIgBlast - an open-source parser to call IgBLAST and parse results for high-throughput sequencing. Uses Python multiprocessing to get around bottlenecks of IgBLAST multi-threading. Parses the BLAST output to delimited files (CSV, JSON) for uploading to databases. Can connect directly to MySQL and Mongo instances to insert results directly.
The code uses Python 2.7 and Biopython.
PyIR
Immunoglobulin and T-Cell receptor rearrangement software. A Python wrapper for IgBLAST that scales to allow for the parallel processing of millions of reads on shared memory computers. All output is stored in a convenient JSON format.
The code uses Python 3.6 and BioPython.
Both tools use a locally installed igblastn executable. You can install igblastn locally with conda install igblast (from the Bioconda channel). A minimal way of driving it from Python directly is sketched below.
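Biopython has no qblast-style wrapper for IgBLAST, so the usual approach is to call the local igblastn binary yourself, e.g. with subprocess. The sketch below is only an illustration: the germline database paths, the organism and the output format are placeholders, so check igblastn -help for the options your IgBLAST version actually supports.
import subprocess

# Placeholder paths/options -- adjust to your local IgBLAST installation.
cmd = [
    'igblastn',
    '-query', 'abc.fasta',
    '-germline_db_V', 'database/human_V',
    '-germline_db_D', 'database/human_D',
    '-germline_db_J', 'database/human_J',
    '-organism', 'human',
    '-outfmt', '19',           # AIRR rearrangement TSV (assumption; see igblastn -help)
    '-out', 'abc_igblast.tsv',
]
subprocess.run(cmd, check=True)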

Using TensorFlow Audio Recognition Model on iOS

I'm trying to use the TensorFlow audio recognition model (my_frozen_graph.pb, generated here: https://www.tensorflow.org/tutorials/audio_recognition) on iOS.
But the iOS code NSString* network_path = FilePathForResourceName(@"my_frozen_graph", @"pb"); in TensorFlow Mobile's tf_simple_example project outputs this error message: Could not create TensorFlow Graph: Not found: Op type not registered 'DecodeWav'.
Does anyone know how I can fix this? Thanks!
I believe you are using the pre-built TensorFlow from CocoaPods? It probably does not have that op type, so you should build it yourself from the latest source.
From the documentation:
While CocoaPods is the quickest and easiest way of getting started, you sometimes need more flexibility to determine which parts of TensorFlow your app should be shipped with. For such cases, you can build the iOS libraries from the sources. This guide contains detailed instructions on how to do that.
This might also be helpful: [iOS] Add optional Selective Registration of Ops #14421
Optimization
The build_all_ios.sh script can take optional command-line arguments to selectively register only the operators used in your graph.
tensorflow/contrib/makefile/build_all_ios.sh -a arm64 -g $HOME/graphs/inception/tensorflow_inception_graph.pb
Please note this is an aggressive optimization of the operators, and the resulting library may not work with other graphs, but it will reduce the size of the final library.
After the build is done, you can check /tensorflow/tensorflow/core/framework/ops_to_register.h for the operations that were registered (it is auto-generated during the build when the -g flag is used).
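A quick way to see which op types the frozen graph itself needs (and therefore which kernels must end up registered) is to read the GraphDef from Python; a small sketch assuming the TF 1.x API and a graph file named my_frozen_graph.pb:
import tensorflow as tf   # TF 1.x

graph_def = tf.GraphDef()
with open('my_frozen_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

# Op types actually referenced by the graph (e.g. DecodeWav, AudioSpectrogram, Mfcc).
print(sorted({node.op for node in graph_def.node}))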
Some progress: having realized the unregistered DecodeWav error is similar to the old familiar DecodeJpeg issue (#2883), I ran strip_unused on the pb as follows:
bazel-bin/tensorflow/python/tools/strip_unused \
--input_graph=/tf_files/speech_commands_graph.pb \
--output_graph=/tf_files/stripped_speech_commands_graph.pb \
--input_node_names=wav_data,decoded_sample_data \
--output_node_names=labels_softmax \
--input_binary=true
It does get rid of the DecodeWav op in the resulting graph. But running the new stripped graph on iOS now gives me an Op type not registered 'AudioSpectrogram' error.
Also there's no object file audio*.o generated after build_all_ios.sh is done, although AudioSpectrogramOp is specified in tensorflow/core/framework/ops_to_register.h:
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name decode*.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARM64/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_ARMV7S/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_I386/tensorflow/core/kernels/decode_wav_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_bmp_op.o
./tensorflow/contrib/makefile/gen/obj/ios_X86_64/tensorflow/core/kernels/decode_wav_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$ find . -name audio*_op.o
Jeffs-MacBook-Pro:tensorflow-1.4.0 zero2one$
Just verified that Pete's fix (https://github.com/tensorflow/tensorflow/issues/15921) is good:
add the line tensorflow/core/ops/audio_ops.cc to the file tensorflow/contrib/makefile/tf_op_files.txt and run tensorflow/contrib/makefile/build_all_ios.sh again (running compile_ios_tensorflow.sh "-O3" by itself used to work for me after adding a line to tf_op_files.txt, but not anymore with TF 1.4).
Also, use the original model file; don't use the stripped version. A note about this was added in the link above.

How can I enable clang-tidy's "modernize" checks?

I just installed ClangOnWin, and I'm trying to get clang-tidy's "modernize" checks to work. Unfortunately, clang-tidy doesn't seem to know about them: clang-tidy -list-checks foo.cpp -- | grep modernize produces no output.
The "modernize" checks are listed here, but that page seems to document Clang 3.8, and the version I have installed is 3.7. However, version 3.7 is the current one listed at the LLVM Download Page.
clang-tidy knows about a variety of security checks, so I think I have it installed correctly. For example, clang-tidy -list-checks foo.cpp -- | grep security yields this:
clang-analyzer-security.FloatLoopCounter
clang-analyzer-security.insecureAPI.UncheckedReturn
clang-analyzer-security.insecureAPI.getpw
clang-analyzer-security.insecureAPI.gets
clang-analyzer-security.insecureAPI.mkstemp
clang-analyzer-security.insecureAPI.mktemp
clang-analyzer-security.insecureAPI.rand
clang-analyzer-security.insecureAPI.strcpy
clang-analyzer-security.insecureAPI.vfork
Is there something special I need to do to enable checks such as modernize-use-override and modernize-use-nullptr?
The modernize checks were added after 3.7 (ported from clang-modernize), but try adding -checks="*" to see the whole list of available checks.
clang-tidy -list-checks -checks="*" foo.cpp --
Have you tried the official binaries from LLVM (http://llvm.org/releases/download.html)? Maybe the ClangOnWin binaries are not compiled with all options, or something of that kind.
