Manipulate Maxima output to show numerical fractions before variables

Say I have the following polynomial:
(3/4) * x^2 - (1/2) *x + 3/4;
Maxima output (also the tex output) would show this as:
(%i1) (3/4) * x^2 - (1/2) *x + 3/4;
         2
      3 x    x   3
(%o1) ---- - - + -
       4     2   4
But I want the fractions to appear before the variables, just as in the input: (3/4) x^2 - (1/2) x + 3/4.
Is that possible?

Try pfeformat:true; Does that help?


How to maintain standard form with binomials such as -x + 1?

I'm using Maxima to print output on a web page in an educational context. Most of the time everything works smoothly, but I'm having problems with binomials such as -x + 1, which Maxima simplifies to 1 - x. I guess this is because that form takes fewer characters.
But in an educational context this is often not desired, since in standard form the term with x should come before the constant.
So is there any option to prevent this kind of simplification?
I have tried using ratvars(x), totaldisrep(-x+1) and declare(x,mainvar), none of those did what I was looking for.
Outputs:
x + 1;
x+1
x - 1;
x-1
-x - 1;
-x-1
-x + 1;
1-x
I wish to find a way to get this last one to output -x+1.
Try setting negsumdispflag to false. See: ? negsumdispflag
I get these results; maybe this is acceptable.
(%i12) negsumdispflag:false;
(%o12) false
(%i13) x + 1;
(%o13) x + 1
(%i14) x - 1;
(%o14) x - 1
(%i15) 1 - x;
(%o15) (- x) + 1
(%i16) - 1 - x;
(%o16) (- x) - 1

OpenCV bilinear downsampling with non-integer scaling

I need help figuring out the algorithm/implementation OpenCV uses for image downsampling with non-integer scaling factors.
I know this question has already been asked a few times, but most answers do not seem to match OpenCV's implementation (for instance, this answer is not correct for OpenCV: https://math.stackexchange.com/questions/48903/2d-array-downsampling-and-upsampling-using-bilinear-interpolation).
Minimal problem formulation:
I want to downsample an image of resolution 4x4 to an image of resolution 3x3 using bilinear interpolation. I am interested in the interpolation coefficients.
Example in Python:
img = np.asarray([[ 1,  2,  3,  4],
                  [ 5,  6,  7,  8],
                  [ 9, 10, 11, 12],
                  [13, 14, 15, 16]]).astype(np.float32)
img_resized = cv2.resize(img, (3, 3), interpolation=cv2.INTER_LINEAR).astype(np.float32)
print(img)
# [[ 1.  2.  3.  4.]
#  [ 5.  6.  7.  8.]
#  [ 9. 10. 11. 12.]
#  [13. 14. 15. 16.]]
print(img_resized)
# [[ 1.8333333  3.1666667  4.5      ]
#  [ 7.166667   8.5        9.833333 ]
#  [12.5       13.833333  15.166666 ]]
Interpolation coefficients:
After a lot of trial-and-error, I figured out the interpolation coefficients OpenCV is using for this specific case.
For the corner points of the 3x3 image:
1.8333333 = 25/36 * 1 + 5/36 * 2 + 5/36 * 5 + 1/36 * 6
4.5000000 = 25/36 * 4 + 5/36 * 3 + 5/36 * 8 + 1/36 * 7
12.5000000 = 25/36 * 13 + 5/36 * 9 + 5/36 * 14 + 1/36 * 10
15.1666666 = 25/36 * 16 + 5/36 * 15 + 5/36 * 12 + 1/36 * 11
For the middle points of the 3x3 image:
8.5 = 1/4 * 6 + 1/4 * 7 + 1/4 * 10 + 1/4 * 11
For the remaining 4 points of the 3x3 image:
3.1666667 = 5/12 * 2 + 5/12 * 3 + 1/12 * 6 + 1/12 * 7
7.1666667 = 5/12 * 5 + 5/12 * 9 + 1/12 * 6 + 1/12 * 10
9.8333333 = 5/12 * 8 + 5/12 * 12 + 1/12 * 7 + 1/12 * 11
13.833333 = 5/12 * 14 + 5/12 * 15 + 1/12 * 10 + 1/12 * 11
Question:
Can someone please help me make sense of these interpolation coefficients? How are they calculated? I tried to read the source of the cv::resize() function, but it did not help me a lot :S
After playing around with various test cases, I think I know the answer to how OpenCV chooses the sample point locations. As #ChrisLuengo has pointed out in a comment, OpenCV seems to not apply a low-pass filter before downsampling, but uses (bi-)linear interpolation only.
(Possible) Solution:
Let's assume we have a 5x5 image, whose pixel positions are represented by the blue circles in the graphic below. We now want to downsample it to a 3x3 or a 4x4 image, and we need to find the sample positions of the new downsampled image within the original image grid.
It appears that OpenCV uses a pixel distance of 1 for the original image grid and a pixel distance of (OLD_SIZE / NEW_SIZE), thus here 5/3 and 5/4, for the new image grid. Moreover, it aligns both grids at their center points. Thus, OpenCV's deterministic sampling algorithm can be visualized as follows:
Visualization 5x5 to 3x3:
Visualization 5x5 to 4x4:
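As a quick check of this rule (a minimal sketch of mine, not part of the trial-and-error above): center-aligning both grids is algebraically the same as sampling at src = (dst + 0.5) * OLD_SIZE / NEW_SIZE - 0.5, and for the 4x4 to 3x3 case from the question the resulting bilinear weights reproduce the coefficients listed there.
import numpy as np
OLD, NEW = 4, 3
scale = float(OLD) / NEW                     # pixel distance of the new grid
for dst in range(NEW):
    src = (dst + 0.5) * scale - 0.5          # center-aligned sample position in the old grid
    frac = src - np.floor(src)
    print("dst=%d  src=%.4f  1D weights=(%.4f, %.4f)" % (dst, src, 1.0 - frac, frac))
# This prints 1D weights of (5/6, 1/6), (1/2, 1/2) and (1/6, 5/6).
# The 2D weight of the top-left corner output pixel is therefore (5/6)*(5/6) = 25/36,
# and its three neighbors get 5/36, 5/36 and 1/36, matching the coefficients above.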
Sample Code (Python 2.7):
import numpy as np
import cv2

# 1. H_W is the height & width of the original image, using uniform H/W for this example;
#    resized_H_W is the height & width of the resized image, using uniform H/W for this example
H_W = 5
resized_H_W = 4

# 2. Create original image & get OpenCV resized image:
img = np.zeros((H_W, H_W)).astype(np.float32)
counter = 1
for i in range(0, H_W):
    for j in range(0, H_W):
        img[i, j] = counter
        counter += 1
img_resized_opencv = cv2.resize(img, (resized_H_W, resized_H_W), interpolation=cv2.INTER_LINEAR).astype(np.float32)

# 3. Get own resized image:
img_resized_own = np.zeros((resized_H_W, resized_H_W)).astype(np.float32)
for i in range(0, resized_H_W):
    for j in range(0, resized_H_W):
        # sample position in the original grid: both grids are aligned at the center,
        # and the new grid has a pixel distance of H_W / resized_H_W
        sample_x = (1.0 * H_W) / 2.0 - 0.50 + (i - (1.0 * resized_H_W - 1.0) / 2.0) * (1.0 * H_W) / (1.0 * resized_H_W)
        sample_y = (1.0 * H_W) / 2.0 - 0.50 + (j - (1.0 * resized_H_W - 1.0) / 2.0) * (1.0 * H_W) / (1.0 * resized_H_W)
        pixel_top_left = img[int(np.floor(sample_x)), int(np.floor(sample_y))]
        pixel_top_right = img[int(np.floor(sample_x)), int(np.ceil(sample_y))]
        pixel_bot_left = img[int(np.ceil(sample_x)), int(np.floor(sample_y))]
        pixel_bot_right = img[int(np.ceil(sample_x)), int(np.ceil(sample_y))]
        # standard bilinear interpolation between the four neighboring pixels
        img_resized_own[i, j] = (1.0 - (sample_x - np.floor(sample_x))) * (1.0 - (sample_y - np.floor(sample_y))) * pixel_top_left + \
                                (1.0 - (sample_x - np.floor(sample_x))) * (sample_y - np.floor(sample_y)) * pixel_top_right + \
                                (sample_x - np.floor(sample_x)) * (1.0 - (sample_y - np.floor(sample_y))) * pixel_bot_left + \
                                (sample_x - np.floor(sample_x)) * (sample_y - np.floor(sample_y)) * pixel_bot_right

# 4. Print results:
print "\n"
print "Org. image: \n", img
print "\n"
print "Resized image (OpenCV): \n", img_resized_opencv
print "\n"
print "Resized image (own): \n", img_resized_own
print "\n"
print "MSE between OpenCV <-> Own: ", np.mean(np.square(img_resized_opencv - img_resized_own))
print "\n"
Disclaimer:
This is just my theory that I tested via ~10 test cases. I do not claim that this is 100% true.

display partial derivatives in maxima

I want to display the partial derivative df/dx of f(x,y) = ln(y-1-x^2)-xy^2.
A comparable example for what I want is:
(%i0) f(x) := x^8$
(%i1) diff(f(x),x);
(%o1) 8*x^7
I've tried:
(%i1) f(x,y):=ln(y-1-x^2)-xy^2$
(%i2) g(x,y):=(diff(f(x,y), x));
(%o2) g(x,y):='diff(f(x,y),x,1)
(%i3) g(x,y):=''(diff(f(x,y), x));
(%o3) g(x,y):='diff(ln(y-x^2-1),x,1)
But it doesn't work (the summand -xy^2 got deleted?).
I want the differentiated function. Something like this:
g(x,y) := 2*x/(1 + x^2 - y) - y^2
The problem with your function is that you wrote xy^2 instead of x*y^2: to Maxima, xy is a single variable (so its derivative with respect to x is 0, which is why that summand disappeared). You should write it as follows:
(%i5) f(x,y):=ln(y-1-x^2)-x*y^2;
(%o5) f(x,y):=ln(y-1-x^2)-x*y^2
(%i6) diff(f(x,y), x);
(%o6) 'diff(ln(y-x^2-1),x,1)-y^2
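Note also that Maxima's natural logarithm is log, not ln; here ln is just an undefined function name, which is why the derivative of the logarithm term is left unevaluated in %o6. A sketch of the complete fix, using define (which evaluates its second argument) instead of the quote-quote approach; the exact 2D display and term ordering of the output may differ slightly:
(%i7) f(x,y) := log(y-1-x^2) - x*y^2$
(%i8) define(g(x,y), diff(f(x,y), x));
(%o8) g(x, y) := -(2*x)/(y-x^2-1) - y^2
This is the same as the desired 2*x/(1 + x^2 - y) - y^2.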

Estimation of Surface Normal in a Dense Range Image

I am trying to implement the surface normal estimation proposed by Hinterstoisser et al. (2011), but I'm not clear on some points:
In equation (9), does D(x) correspond to the depth value (Z-axis) at pixel location x?
How is the value of the gradient ∇D estimated using the 8 neighboring points around the point of interest?
As mentioned, D is a dense range image, meaning that for any pixel location x in D, where x = [x, y]^T, D(x) is the depth at pixel location x (or simply D(x, y)).
Estimating the optimal gradient in a least-squares sense
Suppose we have the following neighborhood around the depth value 5 in D(x) for some x:
8 1 6
3 5 7
4 9 2
Then, using the first-order Taylor expansion
dx^T ∇D(x) + e = D(x + dx) - D(x)
we get, writing g = ∇D(x) for the gradient and e for the error, eight equations for the neighborhood points
[1 0]g + e = 7 - 5
[-1 0]g + e = 3 - 5
[0 1]g + e = 9 - 5
[0 -1]g + e = 1 - 5
[1 1]g + e = 2 - 5
[1 -1]g + e = 6 - 5
[-1 1]g + e = 4 - 5
[-1 -1]g + e = 8 - 5
that we can represent in matrix form A*g + e = b as
[1 0; -1 0; 0 1; 0 -1; 1 1; 1 -1; -1 1; -1 -1] g + e = [2; -2; 4; -4; -3; 1; -1; 3]
Then we minimize the squared error ||A*g - b||_2^2. The analytical solution for the estimate g_hat that minimizes this error is of the form
g_hat = (A^T A)^(-1) A^T b
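For concreteness, here is a minimal numpy sketch of this least-squares fit for the example neighborhood above (only an illustration of the formula, not code from the paper):
import numpy as np
# the eight displacements dx and the corresponding depth differences b = D(x + dx) - D(x)
A = np.array([[ 1,  0], [-1,  0], [ 0,  1], [ 0, -1],
              [ 1,  1], [ 1, -1], [-1,  1], [-1, -1]], dtype=float)
b = np.array([2, -2, 4, -4, -3, 1, -1, 3], dtype=float)
# g_hat = (A^T A)^(-1) A^T b via the normal equations
g_hat = np.linalg.solve(np.dot(A.T, A), np.dot(A.T, b))
print(g_hat)  # a zero gradient here, since this particular 3x3 neighborhood is a magic square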

using wxMaxima to factor a polynomial

Suppose that I define some function, then make a change of variable and expand, as in the following lines:
declare(a,real); declare(k,real); declare(z,real);
myFun(a,k,z):=(1-1/2*((k-a)/2)^2)*z - 1 + 1/2* ((k+3*a)/2)^2;
myFun(a,k,z),simp,a=x0+x1*k;
expand(%);
What I would like to do now is to obtain a polynomial in k, i.e. collect the terms in each power of k with one command so that it shows something like:
(...)k^2 + (...)k + (...)
declare(a,real); declare(k,real); declare(z,real);
myFun(a,k,z):=(1-1/2*((k-a)/2)^2)*z - 1 + 1/2* ((k+3*a)/2)^2;
myFun(a,k,z),simp,a=x0+x1*k;
P: expand(%);
rat(P, k);
gives
(%o7)/R/ - (((x1^2 - 2*x1 + 1)*z - 9*x1^2 - 6*x1 - 1)*k^2
         + ((2*x0*x1 - 2*x0)*z - 18*x0*x1 - 6*x0)*k + (x0^2 - 8)*z - 9*x0^2 + 8)/8
coeff returns each of the coefficients
coeff(P, k^2);
(%o8) - (x1^2*z)/8 + (x1*z)/4 - z/8 + (9*x1^2)/8 + (3*x1)/4 + 1/8
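If you also want the lower-order coefficients explicitly (a small addition, not part of the original answer), the three-argument form of coeff reads them off the expanded polynomial:
coeff(P, k, 1);
coeff(P, k, 0);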
