Background
OpenCV introduced a compute graph (G-API), in which every OpenCV operation can be described as a graph op. They took it further and introduced the ability to run DNN-module inference as an item in the graph (in-graph inference).
An example
https://docs.opencv.org/4.0.0/d3/d7a/tutorial_gapi_anisotropic_segmentation.html
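For orientation, a minimal G-API pipeline through the Python bindings might look like the sketch below (the binding details such as cv.GIn/cv.gin are as exposed in OpenCV 4.x and may vary between versions; the linked tutorial shows the same pattern in C++, plus in-graph DNN inference on top):

```python
import cv2 as cv

# Describe the computation once, symbolically: no pixels are touched here.
g_in = cv.GMat()
g_out = cv.gapi.medianBlur(g_in, 5)
comp = cv.GComputation(cv.GIn(g_in), cv.GOut(g_out))

# Then run the graph on real data as many times as needed.
img = cv.imread("input.png")
blurred = comp.apply(cv.gin(img))
```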
Question
Is it possible to use TensorRT as in-graph inference?
If not, how should one implement a custom operation for inference with TensorRT?
Sources:
https://github.com/opencv/opencv/wiki/files/2020-09-04-GAPI_Overview.pdf
https://github.com/opencv/opencv/wiki/Graph-API
https://github.com/opencv/opencv/tree/4.x/modules/gapi
Related
What algorithm is used in the series_decompose_forecast() function for predicting future values? Is it possible to change the algorithm?
As written in the doc, it's seasonal decomposition. There are a few parameters to tweak the algorithm (all documented). If you want a totally different algorithm, you can build it from more native functions like series_outliers, series_periods_detect, etc., or inline Python/R code (see the sketch below).
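If you take the inline-Python route (e.g. via the python() plugin), a custom forecast can be sketched roughly as below. This is my own toy illustration, not what series_decompose_forecast actually does, and it assumes statsmodels is available in the plugin's environment:

```python
import numpy as np
from statsmodels.tsa.seasonal import seasonal_decompose

def my_forecast(series, period, horizon):
    """Toy forecast: extrapolate a linear trend and repeat the last season."""
    y = np.asarray(series, dtype=float)
    dec = seasonal_decompose(y, period=period, extrapolate_trend='freq')
    t = np.arange(len(y))
    trend = np.polyfit(t, dec.trend, 1)                  # fit a linear trend
    t_future = np.arange(len(y), len(y) + horizon)
    season = np.tile(dec.seasonal[-period:], horizon // period + 1)[:horizon]
    return np.polyval(trend, t_future) + season
```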
I am currently using a CNN-based object detection module which gives me objects that I then use as input for tracking with OpenCV. The object detection module has produced rectangles until now, but I want to shift to a segmentation module like Mask R-CNN, which outputs masks along with rectangles for each object. Masks are a more accurate representation of an object. All the trackers in OpenCV take rectangles as input. Is there any way to use the masks for tracking an object rather than the boxes? I can convert the masks to contours if that will help me track the object.
Sorry, there is no built-in, out-of-the-box solution in OpenCV for active contour models.
This segmentation model is widely used in computer vision (it was proposed by Kass in 1988) and is the starting point for other energy-based segmentation models such as level sets, geodesic active contours, and the fuzzy-snake model.
So, to perform active contour segmentation with OpenCV there are several solutions, but I think you must understand the mathematical model in order to set the parameters properly for your application context.
There is a nice implementation (a bit obfuscated) by Eric Yuan.
And other implementations from SO that could help you link theory to implementation:
Solution 1
Solution 2
My advice:
Read the original paper to understand the parameters.
Test some examples in Matlab to play a bit with the parameters and results.
Test some of the OpenCV implementations linked here.
Determine the best parameters for your problem context and test them.
Think about contributing your results back to OpenCV.
Active contours can track using contours as input: https://www.ee.iitb.ac.in/uma/~krishnan/research.html
So you initialize the first frame with the contour from the CNN model, and in subsequent frames you don't need to call the expensive forward pass; you just update the contour based on this model.
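To connect the two suggestions above, here is a hedged sketch of going from a Mask R-CNN mask to a contour (to seed an active-contour tracker) or to a tight bounding box (for OpenCV's rectangle trackers); mask_rcnn_output is a hypothetical placeholder for your detector's output:

```python
import cv2
import numpy as np

def mask_to_contour(mask):
    """Return the largest external contour of a binary instance mask."""
    contours, _ = cv2.findContours(mask.astype(np.uint8),
                                   cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea) if contours else None

# mask = mask_rcnn_output > 0.5             # hypothetical detector output
# contour = mask_to_contour(mask)
# x, y, w, h = cv2.boundingRect(contour)    # tight box for cv2 trackers
# tracker = cv2.TrackerCSRT_create()        # availability varies by build
# tracker.init(frame, (x, y, w, h))
```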
I'm trying to make a regression model with TensorFlow while using the sklearn implementation so it plays nicely with all the other models I've made. However, I cannot seem to find a way to train the model with a custom score function (cost function or objective function).
Is this simply impossible with skflow?
Thanks loads!
Many of the examples use learn.models.logistic_regression, which is basically a built-in high-level model that returns predictions and losses. For example, models.logistic_regression uses ops.losses_ops.softmax_classifier, so you can look at how ops.losses_ops.softmax_classifier is implemented and then implement your own loss function, perhaps using TensorFlow's low-level APIs.
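A rough sketch of that approach (hedged: this assumes the skflow API of the time, where a model_fn returns a (predictions, loss) pair; names like TensorFlowEstimator and models.linear_regression may differ between versions):

```python
import tensorflow as tf
import skflow  # later absorbed into tf.contrib.learn

def custom_regression_model(X, y):
    # Reuse a built-in model for the predictions, then swap in our own loss
    # (here: mean absolute error instead of the default squared error).
    predictions, _ = skflow.models.linear_regression(X, y)
    loss = tf.reduce_mean(tf.abs(predictions - y))
    return predictions, loss

regressor = skflow.TensorFlowEstimator(model_fn=custom_regression_model,
                                       n_classes=0)  # 0 means regression
# regressor.fit(X_train, y_train)
```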
Can OpenCV be used to calculate dense optical flow using Lucas Kanade method? I am aware of function in gpu/ocl module that can do that (gpu::PyrLKOpticalFlow::dense), but is there non-gpu equivalent of that function?
I'm also aware of Farneback and TV L1, but I need LK / pyramidal LK for my research.
No. Actually, there is no good dense optical flow extraction method in OpenCV. I'm facing the same problem (particle advection on optical flow, right?).
There is a function that estimates optical flow with the Farneback method [1], but it gives me bad results. It uses neither ocl nor gpu.
You may try phaseCorrelate to extract it with a shift-based algorithm. I've used this method. When I upload it to GitHub, I'll give you the link.
[EDIT]
Here is the code. I've decided to separate the phase-correlation algorithm from the whole project to make it simpler to understand:
https://github.com/MatteoRagni/OpticalFlow
Please star it if you intend to use it.
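For a taste of the shift-based idea, here is my own rough illustration of blockwise phase correlation (not necessarily how the linked repository implements it):

```python
import cv2
import numpy as np

def blockwise_phase_flow(prev_gray, next_gray, block=32):
    """Coarse 'dense' flow: one (dx, dy) shift per block via phase correlation."""
    h, w = prev_gray.shape
    flow = np.zeros((h // block, w // block, 2), np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            ys, xs = by * block, bx * block
            a = np.float32(prev_gray[ys:ys + block, xs:xs + block])
            b = np.float32(next_gray[ys:ys + block, xs:xs + block])
            (dx, dy), _ = cv2.phaseCorrelate(a, b)
            flow[by, bx] = (dx, dy)
    return flow
```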
You can find the documentation for OpenCV's non-GPU video analysis functionality here.
There is an implementation of the sparse iterative Lucas-Kanade method with pyramids (specifically, from this paper). The function is called calcOpticalFlowPyrLK, and you build the associated pyramid(s) via buildOpticalFlowPyramid. Note, however, that it is documented as being for sparse feature sets, so I don't know how much of a difference that will make for you if you need dense optical flow.
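A minimal usage sketch (file names are placeholders; to approximate a dense result you can seed a regular pixel grid instead of detected corners):

```python
import cv2

prev_gray = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

# Sparse by design: track strong corners from the first frame.
pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=500,
                              qualityLevel=0.01, minDistance=7)
nxt, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None,
                                            winSize=(21, 21), maxLevel=3)
good_old = pts[status.flatten() == 1]   # keep only successfully tracked points
good_new = nxt[status.flatten() == 1]
```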
I have to train a Support Vector Machine model and I'd like to use a custom kernel matrix instead of the preset ones (like RBF, Poly, etc.).
How can I do that (if it is possible) with OpenCV's machine learning library?
Thank you!
AFAICT, custom kernels for SVM aren't supported directly in OpenCV. It looks like LIBSVM, which is the underlying library that OpenCV uses for this, doesn't provide a particularly easy means of defining custom kernels. So, many of the wrappers that use LIBSVM don't provide this either. There seem to be a few, e.g. scikit for python: scikit example of SVM with custom kernel
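For reference, the scikit-learn route is about this short; SVC accepts any callable that returns the Gram matrix (sketch):

```python
import numpy as np
from sklearn import svm

def my_kernel(X, Y):
    """Return K[i, j] = k(X[i], Y[j]); here an inhomogeneous quadratic kernel."""
    return (X @ Y.T + 1.0) ** 2

clf = svm.SVC(kernel=my_kernel)
# clf.fit(X_train, y_train); clf.predict(X_test)
```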
You could also take a look at a completely different library, like SVMlight. It supports custom kernels directly. Also take a look at this SO question. The answers there include a handful of SVM libraries, along with brief reviews.
If you have compelling reasons to stay within OpenCV, you might be able to accomplish it by using kernel type CvSVM::LINEAR and applying your custom kernel to the data before training the SVM. I'm a little fuzzy on whether this direction would be fruitful, so I hope someone with more experience with SVM can chime in and comment. If it is possible to use a "precomputed kernel" by choosing "linear" as your kernel, then take a look at this answer for more ideas on how to proceed.
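A sketch of that linear-kernel workaround, using the modern cv2.ml interface (the Python equivalent of CvSVM): for any kernel with a finite explicit feature map phi, a linear SVM trained on phi(x) behaves like a kernel SVM with K(x, z) = <phi(x), phi(z)>. The feature map below is an illustrative degree-2 polynomial map, not something OpenCV provides:

```python
import cv2
import numpy as np

def phi(X):
    """Explicit degree-2 polynomial feature map for 2-D inputs (illustrative)."""
    x1, x2 = X[:, 0], X[:, 1]
    return np.stack([x1, x2, x1 * x1, x2 * x2,
                     np.sqrt(2.0) * x1 * x2], axis=1).astype(np.float32)

X = np.random.randn(200, 2).astype(np.float32)
y = (X[:, 0] * X[:, 1] > 0).astype(np.int32)  # XOR-like, not linearly separable

model = cv2.ml.SVM_create()
model.setType(cv2.ml.SVM_C_SVC)
model.setKernel(cv2.ml.SVM_LINEAR)            # linear SVM on mapped features
model.train(phi(X), cv2.ml.ROW_SAMPLE, y)
_, preds = model.predict(phi(X))
```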
You might also consider including LIBSVM and calling it directly, without using OpenCV. See FAQ #418 for LIBSVM, which briefly touches on how to do custom kernels:
Q: I would like to use my own kernel. Any example? In svm.cpp, there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify?
An example is "LIBSVM for string data" in LIBSVM Tools.
The reason why we have two functions is as follows. For the RBF kernel exp(-g |xi - xj|^2), if we calculate xi - xj first and then the norm square, there are 3n operations. Thus we consider exp(-g (|xi|^2 - 2dot(xi,xj) +|xj|^2)) and by calculating all |xi|^2 in the beginning, the number of operations is reduced to 2n. This is for the training. For prediction we cannot do this so a regular subroutine using that 3n operations is needed. The easiest way to have your own kernel is to put the same code in these two subroutines by replacing any kernel.
That last option sounds like a bit of a pain, though. I'd recommend scikit or SVMlight. Best of luck to you!
If you're not married to OpenCV for the SVM stuff, have a look at the Shogun toolbox... lots of SVM voodoo in there.