Yulewalker with GNU Octave - signal-processing

I'm trying to design yulewalk filters.
MATLAB has a yulewalk function:
[b,a] = yulewalk(n,f,m)
Octave's yulewalker, however, is quite different:
[a, v] = yulewalker (c)
The documentation at https://octave.org/doc/interpreter/Signal-Processing.html (all the way to the bottom) is not too helpful...
I have all the data MATLAB needs, but I'm guessing I need to process it further somehow in order to feed it to Octave. Does anyone know how? Does anyone have an example?

Link to the yulewalk function: http://read.pudn.com/downloads48/sourcecode/math/163974/yulewalk.m__.htm
Make sure you run 'pkg load signal' before using the function in Octave.
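For intuition about the difference: Octave's built-in yulewalker fits an AR(p) model from a vector of autocovariances [gamma_0, ..., gamma_p], whereas MATLAB-style yulewalk (which the signal package linked above provides) designs a filter from a desired frequency response, so once the signal package is loaded you can call yulewalk(n,f,m) with the same data you use in MATLAB. A minimal sketch of what yulewalker computes, written in Python/numpy purely for illustration, with a toy autocovariance sequence:

import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_ar(c):
    # c = [gamma_0, ..., gamma_p]: autocovariances, as Octave's yulewalker expects
    c = np.asarray(c, dtype=float)
    a = solve_toeplitz(c[:-1], c[1:])  # solve the Toeplitz system R a = r
    v = c[0] - a @ c[1:]               # variance of the white noise
    return a, v

a, v = yule_walker_ar([2.0, 1.2, 0.6, 0.2])  # toy input
print(a, v)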

Related

Modifying loss function faster rcnn detectron

For my thesis I am trying to modify the loss function of Faster R-CNN with regard to recognizing table structures.
Currently I am using Facebook's Detectron. It seems to be working great, but I am now actively trying to modify the loss function. Debugging my code, I noticed this is where the loss functions are added (fast_rcnn_heads.py:75):
def add_fast_rcnn_losses(model):
    """Add losses for RoI classification and bounding box regression."""
    cls_prob, loss_cls = model.net.SoftmaxWithLoss(
        ['cls_score', 'labels_int32'], ['cls_prob', 'loss_cls'],
        scale=model.GetLossScale()
    )
    loss_bbox = model.net.SmoothL1Loss(
        [
            'bbox_pred', 'bbox_targets', 'bbox_inside_weights',
            'bbox_outside_weights'
        ],
        'loss_bbox',
        scale=model.GetLossScale()
    )
    loss_gradients = blob_utils.get_loss_gradients(model, [loss_cls, loss_bbox])
    model.Accuracy(['cls_prob', 'labels_int32'], 'accuracy_cls')
    model.AddLosses(['loss_cls', 'loss_bbox'])
    model.AddMetrics('accuracy_cls')
    return loss_gradients
The debugger can't find any declaration or implementation of model.net.SmoothL1Loss or SoftmaxWithLoss. Detectron uses Caffe2, and when I look in the net builder (which initializes model.net) I see it creates bindings to caffe2, which itself is a Python library backed by a compiled library.
Am I looking in the wrong place to make a minor adjustment to this loss function, or will I really have to open the Caffe2 source, adjust the loss, and recompile the library?
Greets,
You should implement the loss function yourself; modifying the library source code and recompiling it isn't a very good idea :)
You can create a Python function that takes the ground truth and the predicted data and returns the loss value.
You could also create a duplicate of the smooth-L1 or cross-entropy loss that is currently used, verify that it behaves identically, and then modify it. Or you can implement, for example, an L2 loss for the boxes and use it instead.
You can find more information about custom losses in the Caffe2 documentation.
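As a starting point for the duplicate-then-verify route, here is a minimal numpy sketch of the smooth-L1 (Huber) function that SmoothL1Loss computes elementwise, plus the L2 alternative mentioned above; Detectron's real op additionally applies the inside/outside weights and the loss scale visible in the snippet:

import numpy as np

def smooth_l1(bbox_pred, bbox_targets, beta=1.0):
    # 0.5 * x^2 / beta for small residuals, |x| - 0.5 * beta otherwise
    diff = np.abs(bbox_pred - bbox_targets)
    per_elem = np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta)
    return per_elem.sum()

def l2_loss(bbox_pred, bbox_targets):
    # the L2 box loss suggested above as a drop-in replacement
    return 0.5 * ((bbox_pred - bbox_targets) ** 2).sum()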

convert a normal map to height map with imagemagick: how to recreate/integrate following algorithm

I'm trying to convert a tangent-space normal map to a height/displacement map. Granted, this will not be 100% accurate in terms of the exact height of each pixel, but the relative height from each pixel to the next is more than enough.
Available Algorithm + Info's:
http://www.cournia.com/devnull/n2h/n2h.pdf
Questions:
1. How can I convert a normal map to a height map in Photoshop/GIMP? Is there a way using these tools? Aside: I don't want to use CrazyBump or any other texture tools; this has to run via the command line later on. A Photoshop solution is more or less just a pre-step to understand the workflow a bit better.
2. If it's not possible with PS/GIMP, how can I include the algorithm in an ImageMagick process?
I've already checked Doom 3's Normal2Height, CrazyBump and all the other texture tools such as Nvidia's Photoshop plugin, xNormal, AwesomeBump, SSBump, etc. I need this function working with ImageMagick.
Any help is very much welcome. Python preferable.
thx
There are a couple of possibilities for doing that with ImageMagick.
Firstly, you could implement your own process module. When running configure to install ImageMagick, you would then do:
./configure --with-modules=yes
Then, when you want to apply your bumpmap processing on the command-line, you would do:
convert input.png -process analyse <param1> <param2> result.png
Your processing needs to be written in C/C++, and the best description I know of is on Alan Gibson's webpages here.
Secondly, you could write your entire processing using Magick++, which is the C++ binding for ImageMagick. The best description I know of is here, with sample code here.
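Since Python is preferable for you: the usual way to integrate a normal map into a height map (the approach in the n2h paper linked above) is to convert the normals to surface gradients and integrate them in the frequency domain (Frankot-Chellappa). A minimal sketch, assuming an 8-bit tangent-space normal map and using numpy + Pillow rather than ImageMagick itself; file names are placeholders:

import numpy as np
from PIL import Image

def normals_to_height(path):
    n = np.asarray(Image.open(path).convert('RGB'), dtype=np.float64) / 255.0
    n = n * 2.0 - 1.0                      # unpack [0,1] -> [-1,1]
    nz = np.clip(n[..., 2], 1e-3, None)    # avoid division by zero
    p = -n[..., 0] / nz                    # surface gradient dh/dx
    q = -n[..., 1] / nz                    # surface gradient dh/dy
    h, w = p.shape
    wx = 2j * np.pi * np.fft.fftfreq(w)[None, :]
    wy = 2j * np.pi * np.fft.fftfreq(h)[:, None]
    denom = wx ** 2 + wy ** 2
    denom[0, 0] = 1.0                      # avoid 0/0 at the DC term
    H = (wx * np.fft.fft2(p) + wy * np.fft.fft2(q)) / denom
    height = np.real(np.fft.ifft2(H))
    height -= height.min()                 # shift so heights start at 0
    return height / max(height.max(), 1e-9)

Image.fromarray((normals_to_height('normal.png') * 255).astype(np.uint8)).save('height.png')

The absolute scale is lost (the DC term is discarded), but the pixel-to-pixel relative heights you asked for are preserved.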

Original paper for DisparityWLSFilter in openCV?

I am working on post-processing of a disparity map.
My disparity image, even though it is WLS-filtered, has too many 'holes'.
This is what I get for now. Rectified (in a fish-eye way, but rectified for sure), yet with many holes. The disparity matching algorithm is SGBM. The WLS filter sigma is 2.1, lambda is 30000. The black regions are holes.
I am referring to the official OpenCV tutorial "Disparity map post-filtering", which uses DisparityWLSFilter extensively. But I wonder how it works internally, and I want to read the theoretical paper behind this implementation. I want to know what sigma and lambda do, and how the filter will affect my image.
Also, is there any other good disparity filter that I can use? The WLS filter cannot fill the 'holes' effectively. Or any algorithm that is easy to use or implement, or a library that is not GPL?
Self-reply: I got an answer from OpenCV. The original question is HERE. The reply says:
References have been added here, documentation reference
cc @sbokov
Check out the comments here, and the code here. That should answer some of your questions. To see how the code author came up with this method, you should perhaps contact him directly, as there is no reference for it in the code comments.
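On the hole-filling point: the ximgproc filter can use a confidence map computed from left-right consistency, which is what lets it fill half-occlusions. A minimal Python sketch of that setup, with your sigma/lambda values and placeholder file names (the SGBM parameters are arbitrary starting points):

import cv2

left = cv2.imread('left.png', cv2.IMREAD_GRAYSCALE)
right = cv2.imread('right.png', cv2.IMREAD_GRAYSCALE)

matcher_l = cv2.StereoSGBM_create(minDisparity=0, numDisparities=96, blockSize=5)
matcher_r = cv2.ximgproc.createRightMatcher(matcher_l)  # enables the confidence map

disp_l = matcher_l.compute(left, right)
disp_r = matcher_r.compute(right, left)

wls = cv2.ximgproc.createDisparityWLSFilter(matcher_l)
wls.setLambda(30000)    # lambda: weight of the smoothness (regularization) term
wls.setSigmaColor(2.1)  # sigma: sensitivity of smoothing to image edges
filtered = wls.filter(disp_l, left, disparity_map_right=disp_r)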

Graph Cuts on OpenCV

I'm trying to use the cvFindStereoCorrespondenceGC() function in OpenCV, which implements the graph-cuts algorithm, to find more accurate disparities than with BM. I don't have this function for some reason; did they get rid of it in OpenCV 2.4.5? How else can I implement graph cuts? Thanks.
Yes! The documentation for it is no longer available. If you want to implement graph cuts in Python using OpenCV, here is the link.
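If you end up rolling your own, the core machinery is a min-cut over a pixel grid; the removed function implemented the Kolmogorov-Zabih graph-cut stereo method. A toy binary-label example using the PyMaxflow library (a separate package, not OpenCV, shown here with random stand-in costs); real disparity estimation needs multi-label alpha-expansion on top of this:

import numpy as np
import maxflow  # pip install PyMaxflow

data = np.random.rand(32, 32)             # stand-in for a per-pixel data cost

g = maxflow.Graph[float]()
nodes = g.add_grid_nodes(data.shape)
g.add_grid_edges(nodes, 0.5)              # smoothness term between 4-neighbours
g.add_grid_tedges(nodes, data, 1 - data)  # data term for the two labels
g.maxflow()                               # solve the min-cut
labels = g.get_grid_segments(nodes)       # boolean labelling minimizing the energy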

Can someone explain the parameters of OpenCV Stitcher?

I'm trying to reduce the calculation time of my stitching algorithm. I have some images which I want to stitch in a defined order, but it seems like the cv::Stitcher::stitch() function tries to stitch every image with every other image.
I feel like I might find the solution in the parameters of the OpenCV Stitcher. If not, maybe I have to modify the function or try something else to reduce the calculation time. But since I'm pretty much a beginner, I don't know how. I know that using the GPU might be a possibility, but I just can't get CUDA running on Ubuntu at the moment.
It would be great if you could give me some advice!
Parameters for OpenCV Stitcher module:
Stitcher Stitcher::createDefault(bool try_use_gpu) {
    Stitcher stitcher;
    stitcher.setRegistrationResol(0.6);
    stitcher.setSeamEstimationResol(0.1);
    stitcher.setCompositingResol(ORIG_RESOL);
    stitcher.setPanoConfidenceThresh(1);
    stitcher.setWaveCorrection(true);
    stitcher.setWaveCorrectKind(detail::WAVE_CORRECT_HORIZ);
    stitcher.setFeaturesMatcher(new detail::BestOf2NearestMatcher(try_use_gpu));
    stitcher.setBundleAdjuster(new detail::BundleAdjusterRay());
    ...
from stitcher.cpp:
https://code.ros.org/trac/opencv/browser/trunk/opencv/modules/stitching/src/stitcher.cpp?rev=7244
I want to stitch in a defined order but it seems like
cv::stitcher.stitch() function tries to stitch every image with every
other image.
cv::Stitcher does not have a parameter to fulfil your requirement.
However, the stitching_detailed.cpp sample has the --rangewidth parameter. By setting it to 1, the algorithm will only consider adjacent image pairs (e.g. matches would be computed for pair 1-2 but not for pair 1-3).
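To illustrate what --rangewidth 1 saves (internally the sample switches to detail::BestOf2NearestRangeMatcher for this), here is a rough Python sketch with placeholder file names: only consecutive pairs are matched instead of all O(n^2) combinations.

import cv2

paths = ['img0.jpg', 'img1.jpg', 'img2.jpg']  # placeholder input images
imgs = [cv2.imread(p, cv2.IMREAD_GRAYSCALE) for p in paths]

orb = cv2.ORB_create()
feats = [orb.detectAndCompute(im, None) for im in imgs]

bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
for i in range(len(imgs) - 1):                # pairs (0,1) and (1,2) only
    matches = bf.match(feats[i][1], feats[i + 1][1])
    print('pair %d-%d: %d matches' % (i, i + 1, len(matches)))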
