How is the FREAK descriptor pattern defined? - OpenCV

I have to code my own implementation of the FREAK descriptor for a homework assignment. I read the original paper, but there is no explanation of how the authors build the sampling pattern they use.
The OpenCV code defines a buildPattern() function, but it likewise lacks documentation on how the pattern itself is built.
So my question is: does anybody know how the pattern is defined, and how its parameters (radii, sigmas and coordinates) are selected?

It looks like the exact values aren't important, but Figure 4 shows the rough layout of the 43 receptive fields.
Their exact geometry is defined by the code here: https://github.com/kikohs/freak/blob/master/src/freak.cpp#L212
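For intuition, here is a minimal sketch (in Python; all constants below are illustrative assumptions, not the values OpenCV actually uses) of how such a retinal pattern can be generated: receptive fields sit on concentric rings, the ring radius shrinks geometrically toward the center, alternate rings are rotated by half the angular step so fields interleave, and each field's Gaussian sigma grows with its distance from the center. For the exact numbers, read the linked freak.cpp.

import math

def build_freak_like_pattern(n_rings=7, points_per_ring=6,
                             outer_radius=1.0, shrink=0.7,
                             sigma_scale=0.25):
    """Sketch of a FREAK-style retinal sampling pattern.

    Returns a list of (x, y, sigma) receptive fields: n_rings
    concentric rings of points_per_ring fields each, plus one field
    at the center (7*6 + 1 = 43 fields by default). All constants
    here are illustrative, not OpenCV's.
    """
    fields = []
    radius = outer_radius
    for ring in range(n_rings):
        # Offset alternate rings by half the angular step so the
        # fields interleave instead of lining up radially.
        offset = (math.pi / points_per_ring) * (ring % 2)
        for k in range(points_per_ring):
            theta = 2 * math.pi * k / points_per_ring + offset
            # Sigma (receptive field size) grows with eccentricity,
            # mimicking the retina's coarser periphery.
            sigma = sigma_scale * radius
            fields.append((radius * math.cos(theta),
                           radius * math.sin(theta),
                           sigma))
        radius *= shrink  # rings get denser toward the center
    fields.append((0.0, 0.0, sigma_scale * radius))  # central field
    return fields

pattern = build_freak_like_pattern()
assert len(pattern) == 43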

Signed distance queries between shapes 'Box' and 'Box' are not supported

We tried solving a static equilibrium problem between two boxes:
static_equilibrium_problem = StaticEquilibriumProblem(
    autodiff_plant,
    autodiff_plant.GetMyContextFromRoot(autodiff_context),
    set())
result = Solve(static_equilibrium_problem.prog())
And got this error:
RuntimeError: Signed distance queries between shapes 'Box' and 'Box' are not supported for scalar type drake::AutoDiffXd
Is there more information about why this doesn't work, and how to extend the Static Equilibrium Problem to more general boxes and even meshes?
My guess is the SDF collision query between boxes is not differentiable for some reason, although it works for spheres: https://drake.mit.edu/doxygen_cxx/classdrake_1_1multibody_1_1_static_equilibrium_problem.html
The primary reason is the discontinuity in the derivatives; we haven't decided what we want to do about it yet. While the SDF itself is continuous, its gradient isn't continuous inside the box. I'd recommend posting an issue on Drake with your use case and the kind of results you'd expect given the mathematical properties of the SDF for non-smooth geometries (e.g., boxes, general meshes, etc.). That input will help us make decisions and implicitly raise the priority of resolving the known open issue. It may also turn out that the gradient discontinuity poses no problem for your use case.
You'll note this behavior has been documented for the query. However, for the mathematically related ComputePointPairPenetration() method, we do have additional support for AutoDiffXd (as documented here).
But the issue is your best path forward -- we've introduced some of this functionality based on demonstrable need; you seem to have that.
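To make the discontinuity concrete, here is a small standalone sketch (my own illustration, not Drake code) that numerically differentiates the signed distance of an axis-aligned box. Inside the box the SDF equals minus the distance to the nearest face, so the gradient flips direction abruptly across the planes where two faces are equidistant:

import numpy as np

def box_sdf(p, half_extents):
    """Signed distance from point p to an axis-aligned box at the
    origin: positive outside, negative inside."""
    q = np.abs(p) - half_extents
    outside = np.linalg.norm(np.maximum(q, 0.0))
    inside = min(q.max(), 0.0)
    return outside + inside

def numeric_grad(p, half_extents, eps=1e-6):
    """Central-difference gradient of the SDF."""
    g = np.zeros(3)
    for i in range(3):
        d = np.zeros(3)
        d[i] = eps
        g[i] = (box_sdf(p + d, half_extents) -
                box_sdf(p - d, half_extents)) / (2 * eps)
    return g

half = np.array([1.0, 1.0, 1.0])
# Two interior points straddling the diagonal plane x == y:
print(numeric_grad(np.array([0.51, 0.49, 0.0]), half))  # ~[1, 0, 0]
print(numeric_grad(np.array([0.49, 0.51, 0.0]), half))  # ~[0, 1, 0]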

How to recover a valuation from a satisfiable formula: a question about models

I'm using Z3 with the ML interface. I created a formula
f(x_i)
that is satisfiable according to the solver
Solver.mk_simple_solver ctx.
The problem is: I can get a model, but it only gives me values for some of the variables of the formula, not all of them (some of my Model.get_const_interp_e calls return None).
How can the model give me only some of the x_i? In my understanding, if the model assigns a value to one of the variables, that means the formula was satisfiable (in my case, it is), so values should be available for all of them...
I must be misunderstanding something.
Thanks for reading!
You should always post full examples so people can help with the concrete coding issue; without seeing your actual code, it's impossible to know the real reason.
Having said that, this sounds very much like the following question: Why Z3Py does not provide all possible solutions. So perhaps the answer given there will help you.
Long story short: Z3 models only contain values for the variables that matter to the model. For anything not explicitly assigned, any value will do. There are ways to get "full" models, as explained in that answer, and I'm sure this is also possible from the ML interface.
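As a small illustration in Z3's Python API (the ML interface exposes the same capability, e.g. via Model.eval with its completion flag), model completion fills in arbitrary values for variables the model left unconstrained; the formula below is a made-up example:

from z3 import Bool, Solver, sat

x, y = Bool('x'), Bool('y')
s = Solver()
s.add(x)          # y never matters for satisfiability
assert s.check() == sat

m = s.model()
print(m[y])                              # None: y is unassigned
# Ask Z3 to complete the model: unassigned variables get an
# arbitrary (but consistent) value.
print(m.eval(y, model_completion=True))  # e.g. False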

DL4J - When using a ComputationGraph, is it possible to get the class labels from it?

I saw how to do this from a DataSet object; I also found a setLabel method and a getLabelMaskArrays, but none of these is what I'm looking for.
Am I just blind or is there not a way?
Thanks
Masking is for variable-length time series in RNNs. Most of the time you don't need it. Our built-in sequence dataset iterators also tend to handle these cases. For more details see our RNN page: https://deeplearning4j.org/usingrnns

Understanding the use of nn.Identity() in a model definition using Torch

I am trying to understand the cGAN model given here.
The generative model has symmetric skip connections, as explained in the paper here. Hence, I understand lines such as:
d2 = {d2_,e4} - nn.CAddTable(true)
However, instead of doing the same thing after the last deconv layer d6, the following is done:
d6 = d6_ - nn.Identity()
Can someone please help me understand why nn.Identity() is used here?
nn.Identity() is a module that simply forwards its input unchanged, so it could be removed from their code without changing the result. That said, it seems they aren't implementing the model described in figure 3 of their paper; maybe it performs better without the third skip connection.
nn.Identity() is a placeholder identity operator.
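To see that it really is a no-op, here is a quick check using the equivalent module in PyTorch, torch.nn.Identity (the Lua Torch nn.Identity behaves the same way):

import torch
import torch.nn as nn

identity = nn.Identity()
x = torch.randn(4, 8)
# The module returns its input untouched, which makes it useful as a
# structural placeholder when a layer slot must exist in the graph
# but no transformation is wanted.
assert torch.equal(identity(x), x)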

Search for partially matching pattern in an image

Consider the following problem of locating a 2D pattern inside an image (pixel values 0-255).
A match is said to be found at (x, y) if most of the elements of the window in the bigger matrix (say > 50%) are within some range of the respective elements of the smaller matrix, i.e.
0.8*small[i][j] <= big[x+i][y+j] <= 1.2*small[i][j]
I remember this being a standard problem in image searching, but I couldn't recall, nor find, its exact name.
I would be very grateful if someone could figure out the name of an equivalent standard problem.
Thank you in advance.
I thought it might have been something like "moving windows", so that's what I looked for. Thinking of the right name can be tricky, and with so many similar methods, finding the one you actually want can be hard. Glad I could help you out.
Anyway, it's template matching.
In the context of video compression (as opposed to image recognition) this is called block matching: http://en.wikipedia.org/wiki/Block-matching_algorithm
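As a concrete illustration of the criterion from the question, a brute-force matcher with the ±20% tolerance and a 50% vote threshold could look like the sketch below (my own toy code; the tolerance and threshold come from the question, everything else is assumed). OpenCV's cv2.matchTemplate provides much faster correlation-based variants of the same idea:

import numpy as np

def find_matches(big, small, tol=0.2, vote=0.5):
    """Return all (x, y) where more than `vote` of the template
    pixels fall within [1-tol, 1+tol] times the template value."""
    H, W = big.shape
    h, w = small.shape
    lo, hi = (1 - tol) * small, (1 + tol) * small
    hits = []
    for x in range(H - h + 1):
        for y in range(W - w + 1):
            window = big[x:x + h, y:y + w]
            ok = (window >= lo) & (window <= hi)
            if ok.mean() > vote:
                hits.append((x, y))
    return hits

big = np.random.randint(0, 256, (64, 64)).astype(float)
small = big[10:18, 20:28].copy()
print(find_matches(big, small))  # should include (10, 20)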
