Fitting an image with a 2D Gaussian in Stan - image-processing

I’m trying to fit an image (an N×N matrix), but I can’t figure out how to do it. I suppose my model has to be y ~ multi_normal(mu, Sigma), with mu a vector of the values mu_x and mu_y and Sigma a covariance matrix.
I’m writing a simple Stan program, but I don’t know how to feed the image to the multi_normal. An example of the data could be:
np.array([[ 1,  4,  7,  6],
          [ 2, 11,  9,  4],
          [ 3,  6,  9,  4],
          [ 1,  2,  3,  1]])
This code is broken, but how can I implement it correctly?
data {
  int<lower=0> n;  // dimensions of gaussian
  int<lower=0> N;  // dimensions of the image
  matrix[N, N] x;  // image
}
parameters {
  vector[2] mu;
  matrix<lower=0> cov;
}
model {
  y ~ multi_normal(mu, cov);
  mu ~ normal(0., 100.);   // prior
  cov ~ cauchy(0., 100.);
}
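For reference, the mu and Sigma such a model is after can be sketched directly in NumPy, treating each pixel's (row, column) coordinate as an observation weighted by its intensity (that weighting is an assumption of this sketch, not something stated in the Stan code):
import numpy as np

img = np.array([[1, 4, 7, 6],
                [2, 11, 9, 4],
                [3, 6, 9, 4],
                [1, 2, 3, 1]], dtype=float)

# Treat each pixel's (row, col) coordinate as a sample, weighted by intensity.
rows, cols = np.indices(img.shape)
coords = np.stack([rows.ravel(), cols.ravel()], axis=1).astype(float)
w = img.ravel() / img.sum()

mu = w @ coords                       # intensity-weighted mean (2-vector)
diff = coords - mu
Sigma = (w[:, None] * diff).T @ diff  # intensity-weighted covariance (2x2)
print(mu)
print(Sigma)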

Related

How to keep input and output shape consistent after applying conv2d and convtranspose2d to image data?

I'm using PyTorch to experiment with an image segmentation task. I found that the input and output shapes are often inconsistent after applying Conv2d() and ConvTranspose2d() to my image data of shape [1, 1, height, width]. How can I fix this issue for arbitrary height and width?
import torch
data = torch.rand(1,1,16,26)
a = torch.nn.Conv2d(1,1,kernel_size=3, stride=2)
b = a(data)
print(b.shape) # torch.Size([1, 1, 7, 12])
c = torch.nn.ConvTranspose2d(1,1,kernel_size=3, stride=2)
d = c(b)
print(d.shape) # torch.Size([1, 1, 15, 25])
TL;DR: given the same parameters, nn.ConvTranspose2d is not the inverse operation of nn.Conv2d in terms of dimension (shape) conservation.
From an input with spatial dimension x_in, nn.Conv2d will output a tensor with respective spatial dimension x_out:
x_out = [(x_in + 2p - d*(k-1) - 1)/s + 1]
Where [.] is the floor function, p the padding, d the dilation, k the kernel size, and s the stride.
In your case, k=3 and s=2, while the other parameters default to p=0 and d=1. In other words, x_out = [(x_in - 3)/2 + 1]. So given x_in=16, you get x_out = [7.5] = 7.
On the other hand, we have for nn.ConvTranspose2d:
x_out = (x_in-1)*s - 2p + d*(k-1) + op + 1
Where p is the padding, d the dilation, k the kernel size, s the stride, and op the output padding; note that there is no flooring this time.
In your case, k=3 and s=2, while the other parameters default to p=0, d=1, and op=0. You get x_out = (x_in - 1)*2 + 3. So given x_in=7, you get x_out = 15.
However, if you apply an output padding on your transpose convolution, you will get the desired shape:
>>> conv = nn.Conv2d(1,1, kernel_size=3, stride=2)
>>> convT = nn.ConvTranspose2d(1, 1, kernel_size=3, stride=2, output_padding=1)
>>> convT(conv(data)).shape
torch.Size([1, 1, 16, 26])
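Both formulas are easy to check numerically; here is a small plain-Python sketch (the helper names are my own) that reproduces the shapes from this example:
import math

def conv2d_out(x_in, k, s, p=0, d=1):
    # floor((x_in + 2p - d*(k-1) - 1)/s + 1)
    return math.floor((x_in + 2 * p - d * (k - 1) - 1) / s + 1)

def conv_transpose2d_out(x_in, k, s, p=0, d=1, op=0):
    # (x_in - 1)*s - 2p + d*(k-1) + op + 1
    return (x_in - 1) * s - 2 * p + d * (k - 1) + op + 1

print(conv2d_out(16, k=3, s=2))                  # 7
print(conv_transpose2d_out(7, k=3, s=2))         # 15
print(conv_transpose2d_out(7, k=3, s=2, op=1))   # 16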

Batch wise batch normalization in TensorFlow

What is the correct way to perform batch-wise batch normalization in TensorFlow? (I.e., I don't want to compute a running mean and variance.) My current implementation is based on tf.nn.batch_normalization, where x is the output of a convolutional layer with shape [batch_size, width, height, num_channels]. I want to perform batch norm channel-wise.
batch_mean, batch_var = tf.nn.moments(x, axes=[0, 1, 2])
x = tf.nn.batch_normalization(x, batch_mean, batch_var, offset=0, scale=0, variance_epsilon=1e-6)
But the results of this implementation are very bad. Comparison with tensorflow.contrib.slim.batch_norm shows that it is far inferior (similarly bad training performance).
What am I doing wrong, and what can explain this bad performance?
You may consider tf.contrib.layers.layer_norm. You may want to reshape x to [batch, channel, width, height] and set begin_norm_axis=2 for channel-wise normalization (each batch and each channel will be normalized independently).
Here is an example of how to reshape from your original order to [batch, channel, width, height]:
import tensorflow as tf
sess = tf.InteractiveSession()
batch = 2
height = 2
width = 2
channel = 3
tot_size = batch * height * channel * width
ts_4D_bhwc = tf.reshape(tf.range(tot_size), [batch, height, width, channel])
ts_4D_bchw = tf.transpose(ts_4D_bhwc, perm=[0,3,1,2])
print("Original tensor w/ order bhwc\n")
print(ts_4D_bhwc.eval())
print("\nTransormed tensor w/ order bchw\n")
print(ts_4D_bchw.eval())
Outputs:
Original tensor w/ order bhwc

[[[[ 0  1  2]
   [ 3  4  5]]

  [[ 6  7  8]
   [ 9 10 11]]]

 [[[12 13 14]
   [15 16 17]]

  [[18 19 20]
   [21 22 23]]]]

Transformed tensor w/ order bchw

[[[[ 0  3]
   [ 6  9]]

  [[ 1  4]
   [ 7 10]]

  [[ 2  5]
   [ 8 11]]]

 [[[12 15]
   [18 21]]

  [[13 16]
   [19 22]]

  [[14 17]
   [20 23]]]]
The solution by @Maosi works, but I found that it is slow. The following is simple and fast.
batch_mean, batch_var = tf.nn.moments(x, axes=[0, 1, 2])  # per-channel mean/variance over batch, height, width
x = tf.subtract(x, batch_mean)                            # center
x = tf.div(x, tf.sqrt(batch_var) + 1e-6)                  # normalize; epsilon avoids division by zero
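If you also want the learnable scale (gamma) and offset (beta) that standard batch norm includes, a minimal TF1-style sketch (assuming NHWC input; the helper name is my own) would look like this:
import tensorflow as tf

def channelwise_batch_norm(x, eps=1e-6):
    # x: [batch, height, width, channels]; one gamma/beta per channel.
    num_channels = x.get_shape().as_list()[-1]
    gamma = tf.Variable(tf.ones([num_channels]))   # learnable scale
    beta = tf.Variable(tf.zeros([num_channels]))   # learnable offset
    mean, var = tf.nn.moments(x, axes=[0, 1, 2])   # statistics over batch, height, width
    return tf.nn.batch_normalization(x, mean, var, offset=beta,
                                     scale=gamma, variance_epsilon=eps)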

ZCA whitening in Python for machine learning

I am training on 1000 images of size 28x28. Before training, I perform ZCA whitening on my data, taking the reference from How to implement ZCA Whitening? Python.
Since I have 1000 images of size 28x28, after flattening the dataset becomes 1000x784.
But as given in the code below, is X my image dataset of size 1000x784?
If so, it means the ZCAMatrix is of size 1000x1000.
In this case, for prediction I have an image of size 28x28, or we can say of size 1x784, so it doesn't make sense to multiply the ZCAMatrix by the image.
So I think X is the transpose of the image dataset. Am I right?
If I am right, then the size of the ZCAMatrix is 784x784.
Now, how should I calculate the ZCA whitened image: should I use np.dot(ZCAMatrix, transpose_of_image_to_be_predict) or np.dot(image_to_be_predict, ZCAMatrix)?
Suggestions would be greatly appreciated.
import numpy as np

def zca_whitening_matrix(X):
    """
    Function to compute the ZCA whitening matrix (aka Mahalanobis whitening).
    INPUT:  X: [M x N] matrix.
        Rows: Variables
        Columns: Observations
    OUTPUT: ZCAMatrix: [M x M] matrix
    """
    # Covariance matrix [row-wise variables]: Sigma = (X - mu) * (X - mu)' / N
    sigma = np.cov(X, rowvar=True)  # [M x M]
    # Singular Value Decomposition. sigma = U * np.diag(S) * V
    U, S, V = np.linalg.svd(sigma)
    # U: [M x M] eigenvectors of sigma.
    # S: [M x 1] eigenvalues of sigma.
    # V: [M x M] transpose of U
    # Whitening constant: prevents division by zero
    epsilon = 1e-5
    # ZCA whitening matrix: U * diag(1 / sqrt(S + epsilon)) * U'
    ZCAMatrix = np.dot(U, np.dot(np.diag(1.0 / np.sqrt(S + epsilon)), U.T))  # [M x M]
    return ZCAMatrix
And an example of the usage:
X = np.array([[0, 2, 2], [1, 1, 0], [2, 0, 1], [1, 3, 5], [10, 10, 10] ]) # Input: X [5 x 3] matrix
ZCAMatrix = zca_whitening_matrix(X) # get ZCAMatrix
ZCAMatrix # [5 x 5] matrix
xZCAMatrix = np.dot(ZCAMatrix, X) # apply the ZCA whitening matrix to X
xZCAMatrix # [5 x 3] matrix
I got the reference from the Keras code available here.
It is very clear that in my case the covariance matrix will be a 784x784 matrix, on which Singular Value Decomposition is performed. It gives 3 matrices that are used to calculate the principal_components, and those principal_components are used to find the ZCA whitened data.
Now my question was: how should I calculate the ZCA whitened image? Should I use np.dot(ZCAMatrix, transpose_of_image_to_be_predict) or np.dot(image_to_be_predict, ZCAMatrix)?
For this I got the reference from here. It turns out I need to use np.dot(image_to_be_predict, ZCAMatrix) to calculate the ZCA whitened image.
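To make the transpose question concrete, here is a minimal sketch under the question's dimensions, using the zca_whitening_matrix function above (random arrays stand in for the real images; this is my own illustration, not code from the references):
import numpy as np

X = np.random.rand(1000, 784)           # hypothetical flattened 28x28 dataset, rows are images
ZCAMatrix = zca_whitening_matrix(X.T)   # variables are the 784 pixels, so pass the transpose -> [784 x 784]
x_new = np.random.rand(1, 784)          # one flattened image to predict
x_white = np.dot(x_new, ZCAMatrix)      # [1 x 784]; works because ZCAMatrix is symmetric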

Reshaping tensor after max pooling ValueError: Shapes are not compatible

I am building a CNN to fit my own data, based on this example.
Basically, my data has 3640 features; I have a convolution layer followed by a pooling layer that pools every other feature, so I end up with dimensions (?, 1, 1819, 1), because the 3638 features left after the conv layer divided by 2 give 1819.
When I try to reshape my data after pooling to get it in the form [n_samples, n_features]:
print("pool_shape", pool_shape) #pool (?, 1, 1819, 10)
print("y_shape", y_shape) #y (?,)
pool.set_shape([pool_shape[0], pool_shape[2]*pool_shape[3]])
y.set_shape([y_shape[0], 1])
I get an error:
ValueError: Shapes (?, 1, 1819, 10) and (?, 18190) are not compatible
My code:
N_FEATURES = 140*26
N_FILTERS = 1
WINDOW_SIZE = 3

def my_conv_model(x, y):
    x = tf.cast(x, tf.float32)
    y = tf.cast(y, tf.float32)
    print("x ", x.get_shape())
    print("y ", y.get_shape())
    # to form a 4d tensor of shape batch_size x 1 x N_FEATURES x 1
    x = tf.reshape(x, [-1, 1, N_FEATURES, 1])
    # this will give you sliding window of 1 x WINDOW_SIZE convolution.
    features = tf.contrib.layers.convolution2d(inputs=x,
                                               num_outputs=N_FILTERS,
                                               kernel_size=[1, WINDOW_SIZE],
                                               padding='VALID')
    print("features ", features.get_shape()) # features (?, 1, 3638, 10)
    # Max pooling across output of Convolution+Relu.
    pool = tf.nn.max_pool(features, ksize=[1, 1, 2, 1],
                          strides=[1, 1, 2, 1], padding='SAME')
    pool_shape = pool.get_shape()
    y_shape = y.get_shape()
    print("pool_shape", pool_shape) # pool (?, 1, 1819, 10)
    print("y_shape", y_shape)       # y (?,)
    ### here comes the error ###
    pool.set_shape([pool_shape[0], pool_shape[2]*pool_shape[3]])
    y.set_shape([y_shape[0], 1])
    pool_shape = pool.get_shape()
    y_shape = y.get_shape()
    print("pool_shape", pool_shape) # pool (?, 1, 1819, 10)
    print("y_shape", y_shape)       # y (?,)
    prediction, loss = learn.models.logistic_regression(pool, y)
    return prediction, loss
How can I reshape the data into a meaningful representation that can later be passed to the logistic regression layer?
This looks like a confusion between the Tensor.set_shape() method and the tf.reshape() operator. In this case, you should use tf.reshape() because you are changing the shape of the pool and y tensors:
The tf.reshape(tensor, shape) operator takes a tensor of any shape, and returns a tensor with the given shape, as long as they have the same number of elements. This operator should be used to change the shape of the input tensor.
The tensor.set_shape(shape) method takes a tensor that might have a partially known or unknown shape, and asserts to TensorFlow that it actually has the given shape. This method should be used to provide more information about the shape of a particular tensor.
It can be used, e.g., when you take the output of an operator that has a data-dependent output shape (such as tf.image.decode_jpeg()) and assert that it has a static shape (e.g. based on knowledge about the sizes of images in your dataset).
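A minimal sketch of the contrast (TF1-style, with hypothetical shapes):
import tensorflow as tf

# tf.reshape returns a new tensor with different dimensions but the same elements.
t = tf.zeros([2, 1, 1819, 10])
r = tf.reshape(t, [2, 1819 * 10])   # OK: 2*1*1819*10 == 2*18190

# set_shape changes no data; it only refines the static shape TensorFlow knows about.
p = tf.placeholder(tf.float32, shape=[None, None])
p.set_shape([None, 18190])          # assert: the second dimension is 18190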
In your program, you should replace the calls to set_shape() with something like the following:
pool_shape = tf.shape(pool)
pool = tf.reshape(pool, [pool_shape[0], pool_shape[2] * pool_shape[3]])
y_shape = tf.shape(y)
y = tf.reshape(y, [y_shape[0], 1])
# Or, more straightforwardly:
y = tf.expand_dims(y, 1)

Gaining intuition from gradient descent update rule

Gradient descent update rule (for linear regression): theta := theta - (alpha/m) * x' * (x*theta - y)
Using these values with this rule:
x = [10; 20; 30; 40; 50; 60; 70; 80; 90; 100]
y = [4; 7; 8; 4; 5; 6; 7; 5; 3; 4]
After two iterations using a learning rate of 0.07, theta is:
-73.396
-5150.803
After three iterations, theta is:
1.9763e+04
1.3833e+06
So it appears theta blows up after the second iteration, which suggests the learning rate is too large.
So I set :
iterations = 300;
alpha = 0.000007;
theta is now:
0.0038504
0.0713561
Should these theta values allow me to draw a straight line through the data? If so, how? I've just begun trying to understand gradient descent, so please point out any errors in my logic.
source:
x = [10; 20; 30; 40; 50; 60; 70; 80; 90; 100]
y = [4; 7; 8; 4; 5; 6; 7; 5; 3; 4]
m = length(y)
x = [ones(m, 1), x]
theta = zeros(2, 1);
iterations = 300;
alpha = 0.000007;
for iter = 1:iterations
    theta = theta - ((1/m) * ((x * theta) - y)' * x)' * alpha;
    theta
end
plot(x, y, 'o');
ylabel('Response Time')
xlabel('Time since 0')
Update:
So plotting, for each x value, the product x*theta gives a straight line:
plot(x(:,2), x*theta, '-')
Update 2:
How does this relate to the linear regression model h(x) = theta_0 + theta_1*x, given that the model also outputs a prediction value?
Yes, you should be able to draw a straight line. In regression, gradient descent is an algorithm used to minimize the cost (error) function of your linear regression model. You use the gradient as a track to travel toward the minimum of your cost function, and the learning rate determines how quickly you travel down the path. Go too fast and you might overshoot the global minimum. When you have reached the desired minimum, plug those values of theta into your model to obtain your estimated model. In the one-dimensional case, this is a straight line.
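For comparison, here is a minimal NumPy sketch of the same update rule on the data above (my own translation of the Octave source):
import numpy as np

x = np.arange(10, 101, 10, dtype=float)
y = np.array([4, 7, 8, 4, 5, 6, 7, 5, 3, 4], dtype=float)
X = np.column_stack([np.ones_like(x), x])   # add the intercept column
theta = np.zeros(2)
alpha, iterations = 0.000007, 300
m = len(y)

for _ in range(iterations):
    theta -= (alpha / m) * X.T @ (X @ theta - y)   # theta := theta - (alpha/m) * x'(x*theta - y)

print(theta)   # the fitted intercept and slope
# The straight line is y_hat = theta[0] + theta[1] * x, i.e. X @ theta.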
Check out this article, which gives a nice introduction to gradient descent.
