How do I solve Euler Lagrange equation for image de-blurring? - image-processing

This is one of the two Euler-Lagrange equations for de-blurring which I need to solve:
http://i.stack.imgur.com/AtCLZ.jpg
u_r is the reference image, which is known. u_0 is the original blurred image. k is the blurring kernel, which is unknown. lambda is a constant that is also known, or rather empirically determined. I need to solve this equation for k.
I tried solving it in the Fourier domain, but the results were disappointing for some reason. The output image did not look much different from the input image, although there were pixel-level differences of 2 or 3 gray levels.
All the papers I found say that they solved the equation in code, using lagged diffusivity to linearize it and then applying the conjugate gradient or a fixed-point method. But I can't get my head around this, because the kernel k, which is convolved with the image u_r, is the unknown. How do I implement that in code when the unknown k sits inside a convolution?
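For what it's worth, a minimal numpy sketch of the Fourier-domain route, assuming the Euler-Lagrange equation comes from the quadratic energy ||k * u_r - u_0||^2 + lambda * ||k||^2 (a Tikhonov penalty on k, which may not be the exact regularizer in the linked equation). In that case the unknown k inside the convolution is no problem, because convolution becomes element-wise multiplication in Fourier space and a Wiener-style closed form follows. If the regularizer is total variation instead, this is only the data-term part, and lagged diffusivity wraps it in an outer linearization loop solved with conjugate gradient.

import numpy as np

# Assumed energy: E(k) = || k * u_r - u_0 ||^2 + lambda * || k ||^2.
# Setting dE/dk = 0 and taking the FFT gives K = conj(U_r) U_0 / (|U_r|^2 + lambda).
def estimate_kernel_fourier(u_r, u_0, lam):
    U_r = np.fft.fft2(u_r)
    U_0 = np.fft.fft2(u_0)
    K = np.conj(U_r) * U_0 / (np.abs(U_r) ** 2 + lam)   # element-wise Wiener-style solve
    k = np.real(np.fft.ifft2(K))
    return np.fft.fftshift(k)                            # centre the kernel for inspection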

Related

How to calculate correlation of colours in a dataset?

In this Distill article (https://distill.pub/2017/feature-visualization/) in footnote 8 authors write:
The Fourier transforms decorrelates spatially, but a correlation will still exist
between colors. To address this, we explicitly measure the correlation between colors
in the training set and use a Cholesky decomposition to decorrelate them.
I have trouble understanding how to do that. I understand that for an arbitrary image I can calculate a correlation matrix by interpreting the image's shape as [channels, width*height] instead of [channels, height, width]. But how do I take the whole dataset into account? It could be averaged over, but that doesn't have anything to do with a Cholesky decomposition.
Inspecting the code confuses me even more (https://github.com/tensorflow/lucid/blob/master/lucid/optvis/param/color.py#L24). There is no code for calculating correlations, but there is a hard-coded version of the matrix (and the decorrelation happens by matrix multiplication with this matrix). The matrix is named color_correlation_svd_sqrt, which has svd in its name, yet SVD isn't mentioned anywhere else. Also, the matrix there is not triangular, which means it didn't come from a Cholesky decomposition.
Clarifications on any points I've mentioned would be greatly appreciated.
I figured out the answer to your question here: How to calculate the 3x3 covariance matrix for RGB values across an image dataset?
In short, you calculate the RGB covariance matrix for the image dataset and then do the following calculation:
import torch

U, S, V = torch.svd(dataset_rgb_cov_matrix)           # dataset_rgb_cov_matrix is the 3x3 RGB covariance
epsilon = 1e-10                                        # keeps the sqrt well-defined for tiny singular values
svd_sqrt = U @ torch.diag(torch.sqrt(S + epsilon))
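In case it helps, a small PyTorch sketch of the two surrounding steps: estimating the 3x3 RGB covariance over a dataset and then using the matrix computed above. The `images` tensor and its [N, 3, H, W] layout are assumptions for illustration, not part of the lucid code.

import torch

# Assumed input: training images stacked as a float tensor of shape [N, 3, H, W].
# The 3x3 covariance is computed over every pixel of every image in the dataset.
def rgb_covariance(images):
    pixels = images.permute(1, 0, 2, 3).reshape(3, -1)        # [3, N*H*W]
    pixels = pixels - pixels.mean(dim=1, keepdim=True)
    return pixels @ pixels.t() / (pixels.shape[1] - 1)        # [3, 3]

# svd_sqrt (computed as above) maps decorrelated colour coordinates back to
# correlated RGB, which is how lucid uses its hard-coded matrix; multiplying
# by the inverse instead would decorrelate an image's colours.
def recorrelate_colors(image, svd_sqrt):
    flat = image.reshape(3, -1)                               # image: [3, H, W]
    return (svd_sqrt @ flat).reshape(image.shape)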

Where does the forward radial distortion equation come from in camera calibration?

I'm reading Zhang's camera calibration paper and came across the following two equations.
I don't have journal access to [2, 25] so I can only guess where these formulas were derived. The term radial distortion suggests to me that the equation of a circle
$r^2 = (x-h)^2 + (y-k)^2$
is being employed somewhere.
But I don't quite understand where those polynomials come from exactly.
I'm also confused about what they mean by "normalization". I believe this means dividing by the focal length (in pixels) to create a unitless quantity?
Thank you for any clarification.
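For reference, a minimal sketch of how the forward model is usually implemented, assuming the standard two-coefficient radial polynomial used in Zhang's paper (Zhang writes the equivalent update directly on pixel coordinates; all the names below are placeholders). The "normalization" is the division by depth that puts points on the unit-focal-length image plane, so r^2 = x^2 + y^2 is dimensionless; equivalently, pixel coordinates divided by the focal length in pixels, as you guessed.

import numpy as np

def apply_radial_distortion(x, y, k1, k2):
    # (x, y) are normalized image coordinates (unit focal length), so the
    # polynomial in r^2 = x^2 + y^2 is a unitless radial factor.
    r2 = x ** 2 + y ** 2
    factor = 1.0 + k1 * r2 + k2 * r2 ** 2
    return x * factor, y * factor

def project_with_distortion(point_3d, K, k1, k2):
    # Hypothetical pinhole projection: normalize by depth, distort radially,
    # then map back to pixels with the camera matrix K (zero skew assumed).
    X, Y, Z = point_3d
    x, y = X / Z, Y / Z                        # normalization step
    xd, yd = apply_radial_distortion(x, y, k1, k2)
    u = K[0, 0] * xd + K[0, 2]
    v = K[1, 1] * yd + K[1, 2]
    return u, v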

Map one camera's colour profile to another

I have two cameras (A and B) with which I've taken photos of a calibration scene, then corrected distortion and used feature mapping to get a pixel-precise registration, resulting in the following:
As you can see, the colour response is quite different. What I would like to do now is take a new photo with A and answer the question: what would it look like if instead I had used camera B?
Is there some existing technique or algorithm to convert between the colour spaces/profiles of two cameras like this?
From the image you provided it is not too hard to segment it into small squares. After that, take the mean (or, even better, the median) of each square in both images. Now you have 2*m*n values: MeansReference_(m*n) and MeansQuery_(m*n). Using a linear color correction matrix C, you can construct this linear system:
MeansReference[i][j]= C * MeansQuery[i][j]
Where:
MeansReference[i][j] is a (3*1) vector of the (R,G,B) color of square [i,j] in the reference image.
MeansQuery[i][j] is a (3*1) vector of the (R,G,B) color of square [i,j] in the query image.
C is the 3*3 matrix (a11, a12, ..., a33).
Now, for each i,j you get 3 linear equations (for R, G, B). Since there are 9 variables (a11...a33), you need at least 9 equations, which means at least 3 squares (each square provides 3 equations). However, the more equations you construct, the better the accuracy.
How do you solve a linear system with more equations than variables? Use batch least-squares estimation (batch LSE). You can find the details in the book Neuro-Fuzzy and Soft Computing by Jang, Sun, and Mizutani, or in any online source.
After you find the 9 variables you have a color correction matrix. Just apply it to any image from the new camera and you will get an image that looks like it was taken by the old camera. If you want the opposite, apply C^-1 instead.
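A small numpy sketch of that overdetermined solve, in case it helps; the patch-mean arrays are placeholders, and an ordinary least-squares solve stands in for the batch-LSE step described above.

import numpy as np

# means_query, means_reference: arrays of shape [num_patches, 3] holding the
# mean RGB of each calibration square in the query and reference images.
def fit_color_correction(means_query, means_reference):
    # Solve means_reference ~= means_query @ C^T in the least-squares sense;
    # np.linalg.lstsq handles the overdetermined system directly.
    C_t, *_ = np.linalg.lstsq(means_query, means_reference, rcond=None)
    return C_t.T                                   # the 3x3 color correction matrix

def apply_color_correction(image, C):
    # image: [H, W, 3] float array from the new camera; returns the old-camera look.
    h, w, _ = image.shape
    out = image.reshape(-1, 3) @ C.T
    return out.reshape(h, w, 3)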
Good Luck!

1D discrete denoising of image by variational method (the length of smoothing term)

Regarding this 1D discrete denoising via variational calculus, I would like to know how to handle the length of the smoothing term, since it should be N-1 while the length of the data term is N. Here is the cost:
E = 0;
for i = 1:n
    E = E + (u(i)-f(i))^2 + lambda*(u(i+1)-u(i))^2;   % u(i+1) runs past the end when i = n
end
E is the cost of the current u in the optimization process
f is the given (noisy) image
u is the output (denoised) image
n is the length of the 1D vector
lambda >= 0 is the weight of the smoothness term in the optimization (described around the 13-minute mark in the video)
Here the lengths of the first and second terms mismatch. How do I resolve this?
More importantly, I would like to solve this problem with a system of linear equations.
This is nowhere near my cup of tea, but I think you are referring to the fact that:
u(i+1)-u(i) accesses the next pixel, so the smoothing term is only defined on a domain one pixel smaller than the original f image.
In graphics and filtering this is usually resolved in a few ways:
1. Use a default value for pixels outside the image resolution.
You can set a default or neutral (for the process) color for those pixels (like black).
2. Use the color of the closest neighbor inside the image resolution.
Or interpolate the missing pixels (bilinear, bicubic, ...).
I think the first choice is not suitable for your denoising technique.
3. Change the resolution of the output image.
Usually after some filtering techniques (FIR, etc.) the result is one pixel smaller than the input, which resolves the missing-data problem. In your case it looks like your resulting u image should be one pixel bigger than the input image f while computing the cost function.
So either enlarge it via option #1 and, when the optimization is done, crop back to the original size.
Or virtually crop f down by one pixel (just say n' = n-1) before computing the cost function, so you avoid access violations (and you can also restore the size after the optimization...)
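If it helps, a minimal scipy sketch of the linear-system route the question asks for, assuming the quadratic energy E(u) = sum_i (u_i - f_i)^2 + lambda * sum_{i=1}^{n-1} (u_{i+1} - u_i)^2. Written this way, the smoothing term is simply a difference matrix with one row fewer than the data term, so no padding or cropping is strictly required.

import numpy as np
from scipy.sparse import identity, diags
from scipy.sparse.linalg import spsolve

# Setting dE/du = 0 gives (I + lambda * D^T D) u = f, where D is the
# (n-1) x n forward-difference matrix: D[i, i] = -1, D[i, i+1] = 1.
def denoise_1d(f, lam):
    n = len(f)
    D = diags([-np.ones(n - 1), np.ones(n - 1)], offsets=[0, 1], shape=(n - 1, n))
    A = identity(n) + lam * (D.T @ D)
    return spsolve(A.tocsc(), f)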

Using OpenCV fitEllipse() for circle fitting

Is it valid to use OpenCV's fitEllipse for circle fitting?
fitEllipse() returns a cv::RotatedRect; how about averaging its width and height to get the fitted circle's radius?
I think that the "validity" of using cv::fitEllipse for fitting circles depends on the precision you require for the fitting.
For example you can run your algorithm on a test set, fitting points with cv::fitEllipse and logging the lengths of the two axes of the ellipse, then have a look at the distribution of the ratio of the two axes, or at the difference between the major and the minor axis; you can find how much your supposed circles differ from a true circle and then assess whether you can use cv::fitEllipse.
You can take the average of the width and the height of the cv::RotatedRect returned by cv::fitEllipse to get an approximation of the diameter of the circle (you wrote the radius but I think it was a trivial error).
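A short OpenCV/Python sketch of that approximation; the `points` array is a placeholder for your detected contour points.

import cv2

# points: an (N, 1, 2) or (N, 2) float32/int32 array with N >= 5, as cv2.fitEllipse requires.
def fit_circle_via_ellipse(points):
    (cx, cy), (width, height), angle = cv2.fitEllipse(points)
    radius = (width + height) / 4.0   # the axes are full lengths: /2 for the mean diameter, /2 again for the radius
    return (cx, cy), radius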
You can have a look at this very readable article: Umbach, D.; Jones, K. N., "A few methods for fitting circles to data," IEEE Transactions on Instrumentation and Measurement, 2003, 52(6): 1881-1885, and write your own circle interpolator.
If you want to minimize the geometric error (the sum of the squares of the distances from the points to the circle, as explained in the Introduction of the article) you may need a reliable implementation of a nonlinear minimization algorithm.
Otherwise you can write a simple circle interpolator with the formulae from (II.8) to (II.15) (a closed-form solution which minimizes an error different from the geometric one), with some warnings:
from an implementation point of view you have to take care of the usual warnings about roundoff and truncation error.
the closed-form solution may not be robust enough in the presence of outlier points; in that case you may need to implement a robust interpolator like RANSAC (randomly choose three points, interpolate a circle through those three points with formulae (25) to (34) from Weisstein, Eric W., "Circle," MathWorld -- A Wolfram Web Resource, compute the consensus, and iterate). This warning also applies to the circle found by minimizing the geometric error.
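A rough numpy sketch of that RANSAC loop, using the equivalent circumcentre closed form for the circle through three points rather than a line-by-line transcription of formulae (25)-(34); the iteration count and inlier tolerance are placeholders to tune.

import numpy as np

def circle_from_three_points(p1, p2, p3):
    # Circumscribed circle of three 2D points; returns None if they are (nearly) collinear.
    ax, ay = p1; bx, by = p2; cx, cy = p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-12:
        return None
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay) + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx) + (cx**2 + cy**2) * (bx - ax)) / d
    return (ux, uy), np.hypot(ax - ux, ay - uy)

def ransac_circle(points, iterations=200, tol=1.0):
    # points: (N, 2) float array; keep the circle with the largest consensus set.
    best, best_inliers = None, 0
    for _ in range(iterations):
        sample = points[np.random.choice(len(points), 3, replace=False)]
        fit = circle_from_three_points(*sample)
        if fit is None:
            continue
        (cx, cy), r = fit
        dist = np.abs(np.hypot(points[:, 0] - cx, points[:, 1] - cy) - r)
        inliers = int((dist < tol).sum())
        if inliers > best_inliers:
            best, best_inliers = ((cx, cy), r), inliers
    return best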
There is a function for circle fitting: minEnclosingCircle
