How to optimize a double integral with an unknown in Python - scipy-optimize-minimize

I want to maximize this function numerically in Python using scipy.optimize.minimize and scipy.integrate.dblquad. How can I do this?
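The objective itself is not shown in the post, so here is a minimal sketch with a made-up integrand containing one unknown parameter a; since minimize only minimizes, return the negated integral to maximize it.

import numpy as np
from scipy import integrate, optimize

# Hypothetical integrand with one unknown parameter a; replace with your own.
def integrand(y, x, a):
    return a * np.exp(-a * (x + y))

def neg_integral(params):
    a = params[0]
    # dblquad expects the integrand as f(y, x); integrate over the unit square.
    val, _ = integrate.dblquad(integrand, 0, 1, lambda x: 0, lambda x: 1, args=(a,))
    return -val  # minimizing -I(a) maximizes I(a)

res = optimize.minimize(neg_integral, x0=[1.0], bounds=[(1e-6, None)])
print(res.x[0], -res.fun)  # maximizing parameter value and the maximum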

Related

Fastest way to compute cosine similarity on a GPU

So I have a huge tf-idf matrix with more than a million records, and I would like to compute the cosine similarity of this matrix with itself. I am using Colab to run the code, but I am not sure how best to make use of the GPU it provides.
The code I currently run sequentially:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import linear_kernel

tf = TfidfVectorizer()
tfidf_matrix = tf.fit_transform(df['categories'])
cosine_similarities = linear_kernel(tfidf_matrix, tfidf_matrix)
Is there a way to parallelise the code using jit or any other approach?
Try simple torch code like in this example from the sentence-transformers library: https://github.com/UKPLab/sentence-transformers/blob/master/sentence_transformers/util.py#L31
Or just import the function directly.
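As a rough sketch of what that utility does, assuming a dense matrix small enough to fit in GPU memory (a million-row tf-idf matrix would have to be densified and processed in chunks): L2-normalize the rows, then one matrix product gives all pairwise cosine similarities.

import torch

# Hypothetical dense matrix; real tf-idf output is sparse and far larger.
emb = torch.rand(10000, 300, device='cuda' if torch.cuda.is_available() else 'cpu')

emb = torch.nn.functional.normalize(emb, p=2, dim=1)  # unit-length rows
cosine_sim = emb @ emb.T  # dot products of unit vectors = cosine similarities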
Consider the cuML library, which uses CUDA acceleration:
https://docs.rapids.ai/api/cuml/nightly/api.html

Scilab Error: Mean, Variance not executing

I1 is an RGB image, and the variable 'out' stores one colour channel of the whole image.
The built-in functions for mean, variance and standard deviation give an error when applied to 'out', asking for a real vector or matrix as input.
But when min or max is used, no error is reported, even though according to the Scilab documentation these built-in functions take the same kind of parameter: a vector or matrix of integers.
On further examination, it seems that the variable 'out' is of type matrix of graphic handles when it should be a matrix of integers.
I can't understand why the error occurs for mean and variance when min and max work on the same input.
How can I solve this problem?
The output of imread() is a hypermatrix of integers, not of floating point numbers.
This is shown by the fact that min(out) is displayed as "4" (without a decimal point), not as "4."
Now, mean() and stdev() do not work with integers, only with real or complex numbers.
The solution is to convert the integers to floating point numbers:
mean(double(out))
https://help.scilab.org/docs/6.1.1/en_US/double.html
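Putting it together, a minimal sketch of the fix in Scilab (assuming out already holds the integer colour channel):

out = double(out);   // convert the integer channel to floating point
m = mean(out);       // now accepted
v = variance(out);
s = stdev(out);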

How to compute the 99% quantile of a normal variable: Scilab

I am trying to write a function that computes the quantile of the normal distribution using the function cdfnor, for example:
alpha = cdfnor("PQ", x, 0, 1)
Could anyone help me derive the 99% quantile from this function? How should I define x?
I think the perctl function is what you are looking for...
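If you want the theoretical quantile rather than an empirical percentile of a data vector, my understanding of the cdfnor API is that it can also be run in reverse with the "X" option, where P is the lower-tail probability and Q = 1 - P:

// 99% quantile of the standard normal N(0, 1)
x99 = cdfnor("X", 0, 1, 0.99, 0.01)   // approximately 2.3263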

Is there any equivalent function in the OpenCV gpu namespace to the function cvInvert from the cv namespace?

I'm trying to port an OpenCV application from the cv to the gpu namespace to take advantage of GPU optimizations, and I can't find an equivalent function to cvInvert in the docs. Could you please tell me if such a function exists?
OpenCV does not have an equivalent GPU invert function.
It would be on the gpu operations-on-matrices page, but that page does not contain any function that inverts matrices.
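A common fallback, sketched here under the assumption that an occasional device-to-host round trip is acceptable for your workload: download the matrix, invert it on the CPU with cv::invert, and upload the result back.

#include <opencv2/core/core.hpp>
#include <opencv2/gpu/gpu.hpp>

// Sketch: invert a GpuMat by round-tripping through the CPU.
// cv::invert requires a floating point matrix (CV_32F or CV_64F).
cv::Mat invertViaCpu(const cv::gpu::GpuMat& d_src)
{
    cv::Mat h_src, h_inv;
    d_src.download(h_src);                    // device -> host
    cv::invert(h_src, h_inv, cv::DECOMP_LU);  // CPU inversion
    return h_inv;                             // use GpuMat::upload() to send it back
}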

Using my own kernel in libsvm

I am currently developing my own kernel to use for classification and want to include it in libsvm, replacing the standard kernels that libsvm offers.
However, I am not 100% sure how to do this and obviously do not want to make any mistakes. Be aware that my C++ is not very good. I found the following on the libsvm FAQ page:
Q: I would like to use my own kernel. Any example? In svm.cpp, there are two subroutines for kernel evaluations: k_function() and kernel_function(). Which one should I modify? An example is "LIBSVM for string data" in LIBSVM Tools.

The reason why we have two functions is as follows. For the RBF kernel exp(-g |xi - xj|^2), if we calculate xi - xj first and then the norm square, there are 3n operations. Thus we consider exp(-g (|xi|^2 - 2 dot(xi,xj) + |xj|^2)) and, by calculating all |xi|^2 at the beginning, the number of operations is reduced to 2n. This is for the training. For prediction we cannot do this, so a regular subroutine using the 3n operations is needed. The easiest way to have your own kernel is to put the same code in these two subroutines, replacing any existing kernel.
Hence, I was trying to find the two subroutines k_function() and kernel_function(). The former I found with the following signature in svm.cpp:
double Kernel::k_function(const svm_node *x, const svm_node *y,
const svm_parameter& param)
Am I correct that x and y each store one observation (= row) of my feature matrix in an array, and that I need to return the kernel value k(x, y)?
The function kernel_function(), on the other hand, I was not able to find at all. There is a member function pointer in the Kernel class with that name and the following declaration,
double (Kernel::*kernel_function)(int i, int j) const;
which is set in the Kernel constructor. What are i and j in that case? I suppose I need to set this pointer as well?
Once I have overridden Kernel::k_function and the function pointed to by Kernel::kernel_function, am I done, and will libsvm use my kernel to compare two observations?
Thank you!
You don't have to dig into the code of LIBSVM to use your own kernel; you can use the precomputed kernel option (i.e., -t 4 training_set_file).
Thus, you can compute the kernel matrix externally however it suits you, store the values in a file, and load the precomputed kernel into LIBSVM. There is an explanation, accompanied by an example of how to do this, in the README file in the LIBSVM tarball (see the Precomputed Kernels section, line 236).
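For illustration, a minimal Python sketch of the precomputed format as I read the README (each row starts with the label, then 0:<1-based serial number>, then the kernel value against every training point); the linear kernel here is a stand-in for your custom kernel:

import numpy as np

def write_precomputed(path, labels, K):
    # K[i, j] = k(x_i, x_j); column 0 holds the 1-based row index.
    with open(path, 'w') as f:
        for i, y in enumerate(labels):
            row = ' '.join(f'{j + 1}:{K[i, j]:g}' for j in range(K.shape[1]))
            f.write(f'{y} 0:{i + 1} {row}\n')

X = np.random.rand(5, 3)
write_precomputed('train.pre', [1, 1, -1, -1, 1], X @ X.T)  # then: svm-train -t 4 train.pre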
