I am relatively new to working with DICOM files.
I have 2 DICOM files of the same patient, taken at different times.
They do not have exactly the same dimensions.
The first cube has dimensions 104 x 163 x 140 and the second cube has dimensions 107 x 164 x 140. I would like to align both cubes at the origin and compare them.
The ImagePositionPatient of the first file is: [-207.4748, -151.3715, -198.7500]
The ImagePositionPatient of the second file is: [-207.4500, -156.3500, -198.7500]
Both files have the same ImageOrientationPatient - [ 1 0 0 0 1 0]
Could someone please show me an example? I am not sure how to map from the physical (patient) plane back to the image plane.
Thanks a lot in advance,
Ash
===============================================================
Added: 23/2/17
I have used the matrix formula from the link, where in my case:
IPP (Sxyz) of cube 1 = [-207.4748, -151.3715, -198.7500]
Xxyz (first three values of IOP) = [1, 0, 0]
Yxyz (last three values of IOP) = [0, 1, 0]
delta_i = 2.5
delta_j = 2.5
So for values of i = 0:103 and j = 0:162 of cube 1, I should compute the values of Pxyz?
What is the next step? Sorry, I do not see how this will help me align the two cubes, which have different IPPs, with the image plane.
Sorry for the newbie question ...
I did not verify the matrix you built. But if it is calculated correctly, you can transform from the volume coordinate system (VCS) (x1, y1, z1), where x1 = column, y1 = row and z1 = slice number, to the patient coordinate system (PCS) (x2, y2, z2); these coordinates define the point within the patient in millimeters.
By inverting the matrix, you can transform back from PCS to VCS.
Let's say the transformation matrix for volume 1 is M1, the one for volume 2 is M2, and M2' is the inverse of M2 (transforming PCS -> VCS). Then you can transform a point p1 from volume 1 to the corresponding point p2 in volume 2 by transforming it to the PCS using M1 and then from the PCS into volume 2 using M2'.
Multiplying M2' and M1 gives a matrix that transforms directly from volume 1 to volume 2.
So, for column-vector points:
p2 = (M2' * M1) * p1
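In numpy, this could look like the following minimal sketch (assuming the 2.5 mm pixel spacing and slice spacing given above and axial orientation [1 0 0 0 1 0]; dicom_affine is a helper I made up for illustration, not a pydicom function):

import numpy as np

def dicom_affine(ipp, iop, pixel_spacing, slice_spacing):
    """4x4 matrix mapping voxel indices (column i, row j, slice k, 1) to
    patient coordinates in mm (the VCS -> PCS transform)."""
    row_dir = np.asarray(iop[:3], dtype=float)    # direction of increasing column index
    col_dir = np.asarray(iop[3:], dtype=float)    # direction of increasing row index
    slice_dir = np.cross(row_dir, col_dir)        # direction of increasing slice index
    M = np.eye(4)
    M[:3, 0] = row_dir * pixel_spacing[1]         # column step (delta_i)
    M[:3, 1] = col_dir * pixel_spacing[0]         # row step (delta_j)
    M[:3, 2] = slice_dir * slice_spacing
    M[:3, 3] = np.asarray(ipp, dtype=float)       # ImagePositionPatient of the first slice
    return M

M1 = dicom_affine([-207.4748, -151.3715, -198.7500], [1, 0, 0, 0, 1, 0], [2.5, 2.5], 2.5)
M2 = dicom_affine([-207.4500, -156.3500, -198.7500], [1, 0, 0, 0, 1, 0], [2.5, 2.5], 2.5)

vol1_to_vol2 = np.linalg.inv(M2) @ M1  # M2' * M1: cube 1 indices -> cube 2 indices

p1 = np.array([0, 0, 0, 1])            # first voxel of cube 1, as (i, j, k, 1)
print(vol1_to_vol2 @ p1)               # its (fractional) index in cube 2

Mapping every voxel of cube 1 this way (and interpolating, e.g. with scipy.ndimage.affine_transform) resamples cube 1 onto cube 2's grid, so the two volumes can be compared voxel by voxel.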
I am trying to reproduce a paper that I have read. They visualized the 3D structure tensor of a knee MRI, as you can see in the following image:
Visualization of structure tensor
I have searched the Internet a lot, but I could not find a proper answer to my questions. I have computed the eigenvalues and eigenvectors of the 3D structure tensor as follows:
function [mu3,mu2,mu1,v3x,v3y,v3z,v2x,v2y,v2z,v1x,v1y,v1z] = EigenVectors3DTensor(G1x,G1y,G1z)
% G1x, G1y and G1z hold the first derivative of each voxel in the x, y and z
% directions, respectively. They must keep their 3D volume shape so that the
% Gaussian smoothing below acts in 3D.
% Structure tensor components (outer products of the gradient):
Dxx=G1x.^2;
Dxy=G1x.*G1y;
Dxz=G1x.*G1z;
Dyy=G1y.^2;
Dyz=G1y.*G1z;
Dzz=G1z.^2;
v1x=zeros(size(Dxx),'single');
v1y=zeros(size(Dxx),'single');
v1z=zeros(size(Dxx),'single');
v2x=zeros(size(Dxx),'single');
v2y=zeros(size(Dxx),'single');
v2z=zeros(size(Dxx),'single');
v3x=zeros(size(Dxx),'single');
v3y=zeros(size(Dxx),'single');
v3z=zeros(size(Dxx),'single');
mu1=zeros(size(Dxx),'single');
mu2=zeros(size(Dxx),'single');
mu3=zeros(size(Dxx),'single');
rho=1.1; % integration scale (sigma)
% Smooth the full tensor component volumes once, outside the loop; smoothing
% a single scalar value Dxx(i) per iteration would have no effect.
Jxx = imgaussian(Dxx,rho,4*rho);
Jxy = imgaussian(Dxy,rho,4*rho);
Jxz = imgaussian(Dxz,rho,4*rho);
Jyy = imgaussian(Dyy,rho,4*rho);
Jyz = imgaussian(Dyz,rho,4*rho);
Jzz = imgaussian(Dzz,rho,4*rho);
for i=1:numel(Dxx)
% 3x3 symmetric structure tensor at voxel i
M=[Jxx(i) Jxy(i) Jxz(i); Jxy(i) Jyy(i) Jyz(i); Jxz(i) Jyz(i) Jzz(i)];
[v,d]=eig(M);
v=v';
ev1=d(1,1); ev2=d(2,2); ev3=d(3,3);
ev1a=abs(ev1); ev2a=abs(ev2); ev3a=abs(ev3);
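% Sort the eigenvalues (and their eigenvectors) by absolute value,
% so that |ev1| <= |ev2| <= |ev3| after the swaps below.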
if((ev1a>=ev2a)&&(ev1a>ev3a))
d=ev3; dt=ev3a; dat=v(3,:);
ev3=ev1; v(3,:)=v(1,:);
ev1=d; ev1a=dt; v(1,:)=dat;
elseif((ev2a>=ev1a)&&(ev2a>ev3a))
d=ev3; dt=ev3a; dat=v(3,:);
ev3=ev2; v(3,:)=v(2,:);
ev2=d; ev2a=dt; v(2,:)=dat;
end
if(ev1a>ev2a)
d=ev2; dat=v(2,:);
ev2=ev1; v(2,:)=v(1,:);
ev1=d; v(1,:)=dat;
end
mu1(i)=ev1; mu2(i)=ev2; mu3(i)=ev3;
v1x(i)=v(1,1); v1y(i)=v(1,2); v1z(i)=v(1,3);
v2x(i)=v(2,1); v2y(i)=v(2,2); v2z(i)=v(2,3);
v3x(i)=v(3,1); v3y(i)=v(3,2); v3z(i)=v(3,3);
end
I have two major questions:
1) Is the calculation of the eigenvalues and eigenvectors of the structure tensor correct?
2) How can I visualize the structure tensor on my scans (stack of 2D images)?
I would really appreciate it if someone could help me.
I am getting different shapes for my PCA using sklearn. Why isn't my transformation resulting in an array of the same dimensions like the docs say?
fit_transform(X, y=None)
Fit the model with X and apply the dimensionality reduction on X.
Parameters:
X : array-like, shape (n_samples, n_features)
Training data, where n_samples is the number of samples and n_features is the number of features.
Returns:
X_new : array-like, shape (n_samples, n_components)
Check this out with the iris dataset, which is (150, 4), where I'm making 4 PCs:
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from sklearn.datasets import load_iris
from sklearn.preprocessing import StandardScaler
from sklearn import decomposition
import seaborn as sns; sns.set_style("whitegrid", {'axes.grid' : False})
%matplotlib inline
np.random.seed(0)
# Iris dataset
DF_data = pd.DataFrame(load_iris().data,
                       index = ["iris_%d" % i for i in range(load_iris().data.shape[0])],
                       columns = load_iris().feature_names)
Se_targets = pd.Series(load_iris().target,
                       index = ["iris_%d" % i for i in range(load_iris().data.shape[0])],
                       name = "Species")
# Scaling mean = 0, var = 1
DF_standard = pd.DataFrame(StandardScaler().fit_transform(DF_data),
                           index = DF_data.index,
                           columns = DF_data.columns)
# Sklearn for Principal Component Analysis
# Dims
m = DF_standard.shape[1]
K = m
# PCA (How I tend to set it up)
M_PCA = decomposition.PCA()
A_components = M_PCA.fit_transform(DF_standard)
#DF_standard.shape, A_components.shape
#((150, 4), (150, 4))
but then when I use the exact same approach on my actual dataset of shape (76, 1989), i.e. 76 samples and 1989 attributes/dimensions, I get a (76, 76) array instead of (76, 1989):
DF_centered = normalize(DF_mydata, method="center", axis=0)
m = DF_centered.shape[1]
# print(m)
# 1989
M_PCA = decomposition.PCA(n_components=m)
A_components = M_PCA.fit_transform(DF_centered)
DF_centered.shape, A_components.shape
# ((76, 1989), (76, 76))
normalize is just a wrapper I made that subtracts the mean from each dimension.
(Note: this answer is adapted from my answer on Cross Validated here: Why are there only n−1 principal components for n data points if the number of dimensions is larger or equal than n?)
PCA (as most typically run) creates a new coordinate system by:
shifting the origin to the centroid of your data,
squeezing and/or stretching the axes to make them equal in length, and
rotating your axes into a new orientation.
(For more details, see this excellent CV thread: Making sense of principal component analysis, eigenvectors & eigenvalues.) However, step 3 rotates your axes in a very specific way. Your new X1 (now called "PC1", i.e., the first principal component) is oriented in your data's direction of maximal variation. The second principal component is oriented in the direction of the next greatest amount of variation that is orthogonal to the first principal component. The remaining principal components are formed likewise.
With this in mind, let's examine a simple example (suggested by @amoeba in a comment). Here is a data matrix with two points in a three-dimensional space:
X = [ 1 1 1
2 2 2 ]
Let's view these points in a (pseudo) three dimensional scatterplot:
So let's follow the steps listed above. (1) The origin of the new coordinate system will be located at (1.5,1.5,1.5). (2) The axes are already equal. (3) The first principal component will go diagonally from what used to be (0,0,0) to what was originally (3,3,3), which is the direction of greatest variation for these data. Now, the second principal component must be orthogonal to the first, and should go in the direction of the greatest remaining variation. But what direction is that? Is it from (0,0,3) to (3,3,0), or from (0,3,0) to (3,0,3), or something else? There is no remaining variation, so there cannot be any more principal components.
With N=2 data, we can fit (at most) N−1=1 principal components.
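To see both facts in code, here is a minimal scikit-learn sketch (my own illustration, using the two-point matrix above):

import numpy as np
from sklearn.decomposition import PCA

X = np.array([[1., 1., 1.],
              [2., 2., 2.]])         # the two-point example above

pca = PCA()                          # n_components defaults to min(n_samples, n_features)
scores = pca.fit_transform(X)
print(scores.shape)                  # (2, 2), not (2, 3)
print(pca.explained_variance_)       # [1.5, 0.0]: only N-1 = 1 component carries variance

The same mechanism explains the (76, 76) result above: with 76 samples and 1989 features, PCA keeps at most min(76, 1989) = 76 components, and after centering only 75 of them can carry any variance.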
In LATCH_match.cpp in opencv_3.1.0 the homography matrix is defined and used as:
Mat homography;
FileStorage fs("../data/H1to3p.xml", FileStorage::READ);
...
fs.getFirstTopLevelNode() >> homography;
...
Mat col = Mat::ones(3, 1, CV_64F);
col.at<double>(0) = matched1[i].pt.x;
col.at<double>(1) = matched1[i].pt.y;
col = homography * col;
...
Why is H1to3p.xml the following?
<opencv_storage><H13 type_id="opencv-matrix"><rows>3</rows><cols>3</cols><dt>d</dt><data>
7.6285898e-01 -2.9922929e-01 2.2567123e+02
3.3443473e-01 1.0143901e+00 -7.6999973e+01
3.4663091e-04 -1.4364524e-05 1.0000000e+00 </data></H13></opencv_storage>
By which criteria were these numbers chosen? Can they be used for any other homography test for filtering keypoints (as in LATCH_match.cpp)?
I assume that your "LATCH_match.cpp in opencv_3.1.0" is
https://github.com/Itseez/opencv/blob/3.1.0/samples/cpp/tutorial_code/xfeatures2D/LATCH_match.cpp
In that file, you find:
// If you find this code useful, please add a reference to the following paper in your work:
// Gil Levi and Tal Hassner, "LATCH: Learned Arrangements of Three Patch Codes", arXiv preprint arXiv:1501.03719, 15 Jan. 2015
And so, looking at http://arxiv.org/pdf/1501.03719v1.pdf you will find
For each set, we compare the first image against each of the remaining five and check for correspondences. Performance is measured using the code from [16, 17], which computes recall and 1-precision using known ground truth homographies between the images.
I think that the image ../data/graf1.png is https://github.com/Itseez/opencv/blob/3.1.0/samples/data/graf1.png that I show here:
According to the comment Homography matrix in Opencv? by Catree, the original dataset is at http://www.robots.ox.ac.uk/~vgg/research/affine/det_eval_files/graf.tar.gz, where it is said that
Homographies between image pairs included.
So I think that the homography stored in file ../data/H1to3p.xml is the homography between image 1 and image 3.
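For reference, applying such a homography to a keypoint just means multiplying by H in homogeneous coordinates and then dividing by the third component. A minimal numpy sketch (my own illustration) with the values from H1to3p.xml:

import numpy as np

# Homography from H1to3p.xml: maps points in image 1 to points in image 3
H = np.array([[7.6285898e-01, -2.9922929e-01,  2.2567123e+02],
              [3.3443473e-01,  1.0143901e+00, -7.6999973e+01],
              [3.4663091e-04, -1.4364524e-05,  1.0000000e+00]])

p1 = np.array([100.0, 50.0, 1.0])   # a keypoint in image 1, homogeneous coordinates
p3 = H @ p1
p3 /= p3[2]                          # projective normalization
print(p3[:2])                        # predicted location of the matching point in image 3

Matches whose keypoint in image 3 lands close to this predicted location (within some inlier threshold) can then be kept as correct correspondences, which is the filtering idea used in LATCH_match.cpp.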
I am trying to extract Rotation matrix and Translation vector from the essential matrix.
SVD svd(E,SVD::MODIFY_A);
Mat svd_u = svd.u;
Mat svd_vt = svd.vt;
Mat svd_w = svd.w;
Matx33d W(0,-1,0,
1,0,0,
0,0,1);
Mat_<double> R = svd_u * Mat(W).t() * svd_vt; //or svd_u * Mat(W) * svd_vt;
Mat_<double> t = svd_u.col(2); //or -svd_u.col(2)
However, when I use R and T (e.g. to obtain rectified images), the result does not seem to be right (black images or some obviously wrong outputs), even though I tried the different combinations of possible R and T.
I suspect E itself. According to the textbooks, my calculation is right if we have:
E = U*diag(1, 1, 0)*Vt
In my case svd.w, which is supposed to be diag(1, 1, 0) (at least up to scale), is not. Here is an example of my output:
svd.w = [21.47903827647813; 20.28555196246256; 5.167099204708699e-010]
Also, two of the eigenvalues of E should be equal and the third one should be zero. In the same case the result is:
eigenvalues of E = 0.0000 + 0.0000i, 0.3143 +20.8610i, 0.3143 -20.8610i
As you see, two of them are complex conjugates.
Now, the questions are:
Is the decomposition of E and the calculation of R and T done the right way?
If the calculation is right, why are the internal constraints of the essential matrix not satisfied by the results?
If everything about E, R, and T is fine, why are the rectified images obtained from them not correct?
I get E from the fundamental matrix, which I believe to be right. I drew epipolar lines on both the left and right images and they all pass through the corresponding points (for all 16 points used to calculate the fundamental matrix).
Any help would be appreciated.
Thanks!
I see two issues.
First, discounting the negligible value of the third diagonal term, your E is about 6% off the ideal one: err_percent = (21.48 - 20.29) / 20.29 * 100. That sounds small, but translated into pixel error it may amount to quite a lot more.
So I'd start by replacing E with the ideal one after SVD decomposition: Er = U * diag(1,1,0) * Vt.
Second, the textbook decomposition admits 4 solutions, only one of which is physically plausible (i.e. with the 3D points in front of the cameras). You may be hitting one of the non-physical ones. See http://en.wikipedia.org/wiki/Essential_matrix#Determining_R_and_t_from_E .
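Putting both points together, a minimal numpy sketch of the standard recipe (my own illustration in Python rather than the C++ above; decompose_essential is a hypothetical helper):

import numpy as np

def decompose_essential(E):
    """Project E onto the ideal essential manifold and return the four (R, t) candidates."""
    U, _, Vt = np.linalg.svd(E)
    # Force proper rotations (det = +1); flipping a sign only rescales E
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0., -1., 0.],
                  [1.,  0., 0.],
                  [0.,  0., 1.]])
    R1 = U @ W @ Vt          # first rotation candidate
    R2 = U @ W.T @ Vt        # second rotation candidate
    t = U[:, 2]              # translation direction (up to sign and scale)
    return [(R1, t), (R1, -t), (R2, t), (R2, -t)]

Only one of the four candidates places triangulated points in front of both cameras, so triangulate a few correspondences with each candidate and keep the one that gives positive depths (the cheirality test).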