Is there a way to get the final system of equations sent by cvxpy to the solver? - cvxpy

If I understand correctly, cvxpy converts our high-level problem description into a standard canonical form before it is sent to a solver.
By the standard form I mean the form the solver's algorithm can consume directly; for instance, all the absolute values in the objective would be converted into differences of two nonnegative variables, with some new constraints, etc.
I am wondering if it's possible to see what the reduction looks like for a problem I specify in cvxpy.
For instance, let's say I have the following problem:
import numpy as np
import cvxpy as cp
x = cp.Variable(2)
L = np.asarray([[1,2],[2,3]])
P = L.T @ L
constraints = []
constraints.append(x >= [-10, -10])
constraints.append(x <= [10, 10])
obj = cp.Minimize(cp.quad_form(x, P) - np.array([1, 2]) @ x)
prob = cp.Problem(obj, constraints)
prob.solve(), prob.solver_stats.solver_name
(-0.24999999999999453, 'OSQP')
So, I would like to see the actual arguments (P, q, A, l, u) being sent to the OSQP solver https://github.com/oxfordcontrol/osqp-python/blob/master/module/interface.py#L278
Any help is greatly appreciated!

From looking at the documentation, it seems you can do this using the method get_problem_data, as follows:
data, chain, inverse_data = prob.get_problem_data(prob.solver_stats.solver_name)
I have not tried it, and the docs say the output depends on the particular solver and the solver chain, but it may help you!
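For the OSQP example above, a minimal sketch of inspecting that output might look like this; the exact keys present in data are an assumption on my part, since they vary across cvxpy versions and solver chains:
import cvxpy as cp
# Hedged sketch: ask cvxpy for the data it would hand to OSQP.
data, chain, inverse_data = prob.get_problem_data(cp.OSQP)
print(data.keys())  # see what this cvxpy version actually provides
print(data["P"])    # quadratic term, if exposed under this key
print(data["q"])    # linear term, if exposed under this key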

Related

Efficient pseudo-inverse for PyTorch 2D convolution

Background:
Thanks for your attention! I am learning the basics of 2D convolution, linear algebra, and PyTorch, and I have run into an implementation problem concerning the pseudo-inverse of the convolution operator. Specifically, I have no idea how to implement it in an efficient way. Please see the following problem statement for details. Any help/tip/suggestion is welcome.
The Original Problem:
I have an image feature x with shape [b, c, h, w] and a 3x3 convolutional kernel K with shape [c, c, 3, 3], giving y = K * x. How can I implement the corresponding pseudo-inverse on y in an efficient way?
That is, given y = K * x = Ax, how do I implement x_hat = (A^+)y?
I guess there should be a way to do this with torch.fft. However, I still have no idea how to implement it, and I do not know whether an implementation already exists.
import torch
import torch.nn.functional as F
c = 32
K = torch.randn(c, c, 3, 3)
x = torch.randn(1, c, 128, 128)
y = F.conv2d(x, K, padding=1)
print(y.shape)
# How to implement pseudo-inverse for y = K * x in an efficient way?
Some of My Efforts:
I know that a 2D convolution is a linear operator; it is equivalent to a matrix product, so we could actually write out the matrix form of the convolution and compute its pseudo-inverse. However, I think this type of operation would be inefficient, and I have no idea how to implement it efficiently.
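To make that concrete, here is a small sketch (my own illustration, not the original poster's code) that materializes the matrix A for a tiny case by pushing unit impulses through F.conv2d, then inverts it; the O(n^3) pseudo-inverse is exactly the inefficiency in question:
import torch
import torch.nn.functional as F
c, h, w = 2, 8, 8                                      # kept tiny on purpose
K = torch.randn(c, c, 3, 3)
n = c * h * w
impulses = torch.eye(n).reshape(n, c, h, w)            # one unit impulse per entry
A = F.conv2d(impulses, K, padding=1).reshape(n, n).T   # column i = response to impulse i
A_pinv = torch.linalg.pinv(A)                          # O(n^3): infeasible for real images
x = torch.randn(c, h, w)
y = F.conv2d(x.unsqueeze(0), K, padding=1).reshape(-1)
print(torch.allclose(A @ x.reshape(-1), y, atol=1e-4)) # A really is the conv as a matrix
x_hat = A_pinv @ y                                     # pseudo-inverse applied to y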
According to Wikipedia, the pseudo-inverse satisfies A(A_pinv(x)) = x whenever x lies in the range of A, where A is the convolution operator, A_pinv is its pseudo-inverse, and x is an image feature.
(Thanks again for reading such a long post!)
This takes the problem to another level.
The convolution itself is a linear operation, so you can determine the matrix of the operation and solve a least-squares problem directly [1], or compute the pseudo-inverse as you mentioned and then apply it to different outputs, recovering a projection of the input.
I am changing your code to use padding=0:
import torch
import torch.nn.functional as F
# your code
c = 32
K = torch.randn(c, c, 3, 3)
x = torch.randn(4, c, 128, 128)
y = F.conv2d(x, K, bias=torch.zeros((c,)))
Also, as you already suggested, the convolution can be computed as ifft(fft(h) * fft(x)). However, the conv2d function is actually a cross-correlation, so you have to conjugate the filter, leading to ifft(conj(fft(h)) * fft(x)). You also have to apply this along two axes, and you have to make sure both FFTs are calculated at the same size; since the data is real, we can apply the multi-dimensional real FFT. To be complete, conv2d works on multiple channels, so we have to calculate summations of convolutions. Since the FFT is linear, we can simply compute those summations in the frequency domain using einsum.
s = x.shape[-2:]  # FFT size: the full 128x128 input
K_f = torch.fft.rfftn(K, s)
x_f = torch.fft.rfftn(x, s)
y_f = torch.einsum('jkxy,ikxy->ijxy', K_f.conj(), x_f)
y_hat = torch.fft.irfftn(y_f, s)
Except for the borders it should be accurate (remember FFT computes a cyclic convolution).
torch.max(abs(y_hat[:,:,:-2,:-2] - y[:,:,:,:]))
Now, notice the pattern jk,ik->ij in the einsum: it means y_f[i,j] = sum(K_f[j,k] * x_f[i,k]) = x_f @ K_f.T, if @ denotes the matrix product over the first two dimensions. So to invert the operation we can interpret the first two dimensions as matrices. The function torch.linalg.pinv computes pseudo-inverses on the last two axes, so in order to use it we have to permute the axes. If we right-multiply the output by the pseudo-inverse of the transposed K_f, we should invert the operation.
s = 128,128
K_f = torch.fft.rfftn(K, s)
K_f_inv = torch.linalg.pinv(K_f.T).T
y_f = torch.fft.rfftn(y_hat, s)
x_f = torch.einsum('jkxy,ikxy->ijxy', K_f_inv.conj(), y_f)
x_hat = torch.fft.irfftn(x_f, s)
print(torch.mean((x - x_hat)**2) / torch.mean((x)**2))
Notice that I am using the full (circular) convolution, while conv2d actually crops the image. Let's apply that cropping:
k = 3  # kernel size
y_hat[:,:,128-(k-1):,:] = 0
y_hat[:,:,:,128-(k-1):] = 0
Repeating the calculation, you will see that the input is no longer recovered accurately, so you have to be careful about what you do with your convolution. But in situations where you can get this to work, it will indeed be efficient.
s = 128,128
K_f = torch.fft.rfftn(K, s)
K_f_inv = torch.linalg.pinv(K_f.T).T
y_f = torch.fft.rfftn(y_hat, s)
x_f = torch.einsum('jkxy,ikxy->ijxy', K_f_inv.conj(), y_f)
x_hat = torch.fft.irfftn(x_f, s)
print(torch.mean((x - x_hat)**2) / torch.mean((x)**2))

Translating an optimization problem from CVX to CVXPY?

I am attempting to translate a semidefinite programming problem from CVX to CVXPY as described here. My attempt follows:
import cvxpy as cvx
import numpy as np
c = [0, 1]
n = len(c)
# Create optimization variables.
f = cvx.Variable((n, n), hermitian=True)
# Create constraints.
constraints = [f >> 0]
for k in range(1, n):
    indices = [(i * n) + i - (n - k) for i in range(n - k, n)]
    constraints += [cvx.sum(cvx.vec(f)[indices]) == c[n - k]]
# Form objective.
obj = cvx.Maximize(c[0] - cvx.trace(f))
# Form and solve problem.
prob = cvx.Problem(obj, constraints)
sol = prob.solve()
print(sol)
print(f.value)
The issue here is that when I take the coefficients of the Fourier series and translate them into the array c, it fails on complex values. I think this is due to a discrepancy between the maximize functions of CVX and CVXPY. I'm not sure what CVX is maximizing, since the trace of the matrix is a complex value. As pointed out below, the trace is real since the matrix is Hermitian, but the code still fails. Can someone with CVXPY knowledge clear this up?
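One thing worth trying (my suggestion, not verified against the original thread): although the trace of a Hermitian matrix is mathematically real, CVXPY types cvx.trace(f) as complex, so the Maximize objective gets rejected; wrapping the trace in cvx.real usually resolves this.
# Hedged fix: replace the objective line in the code above.
obj = cvx.Maximize(c[0] - cvx.real(cvx.trace(f)))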

How tf.gradients works in TensorFlow

Given a linear model like the following, I would like to get the gradient vector with respect to W and b.
import tensorflow as tf
import numpy as np

rng = np.random
n_samples = 17  # any positive sample count; assumed here so the snippet runs

# tf Graph Input
X = tf.placeholder("float")
Y = tf.placeholder("float")
# Set model weights
W = tf.Variable(rng.randn(), name="weight")
b = tf.Variable(rng.randn(), name="bias")
# Construct a linear model
pred = tf.add(tf.mul(X, W), b)
# Mean squared error
cost = tf.reduce_sum(tf.pow(pred-Y, 2))/(2*n_samples)
However, if I try something like the following, where cost is a function cost(x, y, w, b) and I only want the gradients with respect to w and b:
grads = tf.gradients(cost, tf.all_variables())
my placeholders (X and Y) will also be included.
Even if I do get a gradient for [x, y, w, b], how do I know which element in the gradient belongs to which parameter, since it is just a list with no names saying which parameter each derivative was taken with respect to?
In this question I'm using parts of this code and I build on this question.
Quoting the docs for tf.gradients:
Constructs symbolic partial derivatives of sum of ys w.r.t. x in xs.
So, this should work:
dc_dw, dc_db = tf.gradients(cost, [W, b])
Here, tf.gradients() returns the gradient of cost with respect to each tensor in the second argument, as a list in the same order.
Read tf.gradients for more information.
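As a quick usage sketch (my own, untested; x_train and y_train are assumed NumPy arrays), you could then evaluate those gradients in a session:
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    gw, gb = sess.run([dc_dw, dc_db], feed_dict={X: x_train, Y: y_train})
    print(gw, gb)  # dcost/dW and dcost/db at the current W and b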

Minimize an equation using OpenCV

I need to solve the following equation: minimize ||G * p|| subject to ||p|| = 1.
I know the matrix G; how can I find the matrix p?
Currently I am solving in opencv as follows:
Mat w, u, EigenVectors;
SVD::compute(A, w, u, EigenVectors);  // A holds the matrix G; EigenVectors receives V^T
Mat p = EigenVectors.row(EigenVectors.rows - 1);  // row for the smallest singular value
I want to know how I can ensure the condition ||p|| = 1.
I would also like to know the significance and meaning of the other rows/columns of EigenVectors (i.e., of V transposed).
I believe you can use cv::SVD::solveZ(). It finds a unit-length solution x of the singular linear system A * x = 0.
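For intuition, here is a NumPy equivalent (my own sketch, not OpenCV): the unit-norm minimizer of ||G * p|| is the right singular vector belonging to the smallest singular value, and the rows of V^T are orthonormal, which is why ||p|| = 1 holds automatically. The other rows of V^T are the remaining right singular directions, ordered by how strongly G stretches them.
import numpy as np
G = np.random.randn(6, 4)       # stand-in for the matrix G (assumption)
_, s, Vt = np.linalg.svd(G)
p = Vt[-1]                      # right singular vector of the smallest singular value
print(np.linalg.norm(p))        # 1.0 -- rows of V^T are unit length
print(np.linalg.norm(G @ p))    # smallest achievable ||G p|| over unit-norm p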
It looks like you need the method of Lagrange multipliers.
As far as I know, OpenCV has no ready-to-use tools for that.
A good example for MATLAB: Lagrange Multipliers

Minimum and maximum values of integer variable

Let's assume a very simple constraint: solve(x > 0 && x < 5).
Can Z3 (or any other SMT solver, or any other automatic technique)
compute the minimum and maximum values of (integer) variable x that satisfies the given constraints?
In our case, the minimum is 1 and the maximum is 4.
Z3 has no support for optimizing (maximizing/minimizing) objective functions or variables.
We plan to add this kind of capability, but it will not happen this year.
In the current version, we can "optimize" an objective function by solving several problems, where in each iteration we add additional constraints. We know we have found the optimum when the problem becomes unsatisfiable. Here is a small Python script that illustrates the idea. The script maximizes the value of a variable X. For minimization, we just have to replace s.add(X > last_model[X]) with s.add(X < last_model[X]). This script is very naive: it performs a "linear search". It can be improved in many ways, but it demonstrates the basic idea.
You can also try the script online at: http://rise4fun.com/Z3Py/KI1
See the following related question: Determine upper/lower bound for variables in an arbitrary propositional formula
from z3 import *

# Given formula F, find the model that maximizes the value of X,
# using at most M iterations.
def max(F, X, M):
    s = Solver()
    s.add(F)
    last_model = None
    i = 0
    while True:
        r = s.check()
        if r == unsat:
            if last_model != None:
                return last_model
            else:
                return unsat
        if r == unknown:
            raise Z3Exception("failed")
        last_model = s.model()
        s.add(X > last_model[X])
        i = i + 1
        if (i > M):
            raise Z3Exception("maximum not found, maximum number of iterations was reached")

x, y = Ints('x y')
F = [x > 0, x < 10, x == 2*y]
print(max(F, x, 10000))
As Leonardo pointed out, this was discussed in detail before: Determine upper/lower bound for variables in an arbitrary propositional formula. Also see: How to optimize a piece of code in Z3? (PI_NON_NESTED_ARITH_WEIGHT related).
To summarize, one can either use a quantified formula, or go iteratively. Unfortunately, these techniques are not equivalent:
Quantified approach needs no iteration, and can find global min/max in a single call to the solver; at least in theory. However, it does give rise to harder formulas. So, the backend solver can time-out, or simply return "unknown".
Iterative approach creates simple formulas for the backend solver to deal with, but it can loop forever if there's no optimal value; simplest example being trying to find the largest Int value. Quantified version can solve this problem nicely by quickly telling you that there is no such value, while the iterative version would go on indefinitely. This can be a problem if you don't know ahead of time that your constraints do have an optimal solution. (Needless to say, the "sufficient" iteration count is typically hard to guess, and might depend on random factors, like the seed used by the solver.)
Also keep in mind that if there is a custom optimization algorithm for the problem domain at hand, it's unlikely that a general purpose SMT solver can outperform it.
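To make the quantified approach concrete, here is a small sketch of my own using the toy constraint from the question: x is maximal exactly when no y satisfying the same constraints exceeds it.
from z3 import *
x, y = Ints('x y')
s = Solver()
s.add(x > 0, x < 5)
# Quantified optimality: every feasible y is at most x.
s.add(ForAll([y], Implies(And(y > 0, y < 5), y <= x)))
print(s.check())   # sat
print(s.model())   # [x = 4]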
z3 now supports optimization.
from z3 import *
o = Optimize()
x = Int('x')
o.add(And(x > 0, x < 5))
o.maximize(x)
print(o.check()) # prints sat
print(o.model()) # prints [x = 4]
This particular problem is an integer program.
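For instance, here is a sketch of my own using cvxpy (as earlier on this page; it requires a mixed-integer-capable solver to be installed): since x is an integer, x > 0 && x < 5 is equivalent to 1 <= x <= 4, and the minimum and maximum are two tiny integer programs.
import cvxpy as cp
x = cp.Variable(integer=True)
cons = [x >= 1, x <= 4]  # integer form of x > 0 and x < 5
lo = cp.Problem(cp.Minimize(x), cons).solve()
hi = cp.Problem(cp.Maximize(x), cons).solve()
print(lo, hi)  # 1.0 4.0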
