How can I extract a line profile from an image?

I'm trying to extract a line profile from diffraction patterns (2D images).
The ROI is known, e.g. from (x1,y1) to (x2,y2).
My challenge is to do this on an original source that is a 4D dataset, (scanned real space) x (diffraction image),
and then convert it to 3D data, (scanned real space) x (line profile).
A script that only extracts the line profile from a diffraction pattern would already be helpful.
I think SliceN would be best for this?

The script command to extract a line profile of a 2D image with perpendicular averaging is LiveProfile_ExtractLineProfile, and the following example would work on a 2D diffraction pattern:
image DPImg := GetFrontImage()
// Line end points (in pixels) and the perpendicular averaging width
number kx1 = 77
number ky1 = 77
number kx2 = 175
number ky2 = 175
number pWidth = 10
image profile := LiveProfile_ExtractLineProfile( DPImg, kx1, ky1, kx2, ky2, pWidth )
profile.ShowImage()
If you have a 4D stack, then you're right that you need to use the SliceN command to access the corresponding "plane", i.e. the diffraction pattern at a specified X/Y. The following script would do that:
image DPStack := GetFrontImage()
number sx = DPStack.ImageGetDimensionSize(0)
number sy = DPStack.ImageGetDimensionSize(1)
number ksx = DPStack.ImageGetDimensionSize(2)
number ksy = DPStack.ImageGetDimensionSize(3)
// Scan position to look at (here: the centre of the scanned area)
number px = sx/2
number py = sy/2
// Cut the 2D diffraction pattern at (px,py) out of the 4D stack
image DPImg := DPStack.SliceN( 4,2, px,py,0,0, 2,ksx,1, 3,ksy,1 )
number kx1 = 77
number ky1 = 77
number kx2 = 175
number ky2 = 175
number pWidth = 10
image profile := LiveProfile_ExtractLineProfile( DPImg, kx1, ky1, kx2, ky2, pWidth )
profile.ShowImage()
Putting all of this back into the format you need is a matter of iteration and data insertion, which you can do with slice commands as well.
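If it helps to see that iteration spelled out, here is a minimal NumPy sketch of the same logic outside DM. It assumes you have exported the 4D data as a numpy array called stack, and line_profile is a simple re-implementation of perpendicular averaging, not the DM command:

import numpy as np
from scipy.ndimage import map_coordinates

def line_profile(img, x1, y1, x2, y2, width, n=100):
    # Sample n points along the line from (x1,y1) to (x2,y2)
    t = np.linspace(0.0, 1.0, n)
    xs = x1 + t * (x2 - x1)
    ys = y1 + t * (y2 - y1)
    # Unit vector perpendicular to the line, used for the averaging width
    length = np.hypot(x2 - x1, y2 - y1)
    nx, ny = -(y2 - y1) / length, (x2 - x1) / length
    rows = []
    for o in np.linspace(-width / 2.0, width / 2.0, int(width)):
        # map_coordinates samples in (row, col) = (y, x) order
        rows.append(map_coordinates(img, [ys + o * ny, xs + o * nx], order=1))
    return np.mean(rows, axis=0)

n = 100
# stack has shape (sx, sy, ksx, ksy); result gets shape (sx, sy, n)
result = np.empty(stack.shape[:2] + (n,))
for px in range(stack.shape[0]):
    for py in range(stack.shape[1]):
        result[px, py] = line_profile(stack[px, py], 77, 77, 175, 175, 10, n)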
Note: There is also an example script in the DM Scripting database that might be a useful reference for you.

Related

Texture transformation

I am working on eigen transformation - texture to detect objects in an image. This work was published in ACCV 2006, page 71. The full PDF is available (see chapter 3): https://www.diva-portal.org/smash/get/diva2:275069/FULLTEXT01.pdf. What I am not able to follow is what comes after getting the texture descriptors.
I am working on the attached image. The image size is 954×1440.
I took image patches of 32×32 and for every patch calculated the eigenvalues to get the texture descriptor. What to do with these texture descriptors afterwards is what I am not able to follow.
Any help to unblock me will be really appreciated. The code for calculating the descriptors looks like this:
import numpy as np

w = 32  # patch size; must be defined before it is used below
descriptors = np.zeros((gray.shape[0]//w, gray.shape[1]//w))  # gray: 2D grayscale image
for i in range(gray.shape[0]//w):
    for j in range(gray.shape[1]//w):
        # Eigenvalues of the w x w patch, sorted in descending order
        sorted_eigen = -np.sort(-np.linalg.eigvals(
            gray[i*w:(i+1)*w, j*w:(j+1)*w]))
        # Average the magnitudes of the smaller eigenvalues; the indices are
        # patch-local (the original i*w offset would run past the w values)
        l = 13
        k = w
        theta_svd = (1/(k-l+1)) * np.sum(np.abs(sorted_eigen[l:k]))
        descriptors[i, j] = theta_svd

Difference between absdiff and normal subtraction in OpenCV

I am currently planning on training a binary image classification model. The images I want to train on are the difference between two original pictures. In other words, for each data entry, I start out with 2 pictures, take their difference, and label that difference as a 0 or 1. My question is what the best way is to find this difference. I know about cv2.absdiff and normal subtraction of images - which is the most effective way to go about this?
About the data: The images I'm training on are screenshots that usually are the same but may have small differences. I found that normal subtraction seems to show the differences less than absdiff.
This is the code I use for absdiff:
import cv2
import numpy as np

diff = cv2.absdiff(img1, img2)
mask = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
th = 1
imask = mask > th
canvas = np.zeros_like(img2, np.uint8)
canvas[imask] = img2[imask]
And then this for normal subtraction:
def extract_diff(self, imageA, imageB, image_name, path):
    subtract = imageB.astype(np.float32) - imageA.astype(np.float32)
    mask = cv2.inRange(np.abs(subtract), (30, 30, 30), (255, 255, 255))
    th = 1
    imask = mask > th
    canvas = np.zeros_like(imageA, np.uint8)
    canvas[imask] = imageA[imask]
Thanks!
A difference can be negative or positive.
Some number types, such as uint8 (unsigned 8-bit int), can't represent negative values (they have no sign), so a negative result wraps around and the value no longer makes sense. Other types are signed (e.g. floats, signed ints), so a negative value can be represented correctly.
That's why cv.absdiff exists. It always gives you absolute differences, and those are fine to represent in an unsigned type.
Example with numbers: a = 4, b = 6. a-b should be -2, right?
That value, as an uint8, will wrap around to become 0xFE, or 254 in decimal. The 254 value has some relation to the true -2 difference, but it also incorporates the range of values of the data type (8 bits: 256 values), so it's really just "code".
cv.absdiff would give you the absolute of the difference (-2), which is 2.
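To see this with the numbers above, here is a quick check in Python (a minimal sketch, assuming OpenCV and NumPy are installed):

import cv2
import numpy as np

a = np.array([4], dtype=np.uint8)
b = np.array([6], dtype=np.uint8)

print(a - b)                   # [254]  -- uint8 wraps around (0xFE)
print(cv2.absdiff(a, b))       # [[2]]  -- absolute difference, safe in uint8
print(a.astype(np.int16) - b)  # [-2]   -- a signed type keeps the true difference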

DICOM: alignment

I am relatively new to working with DICOM files. Thanks in advance.
I have 2 dicom files of the same patient taken at different intervals.
They are not exactly the same dimensions.
The first one, cube1, has dimensions 104×163×140, and the second one, cube2, has dimensions 107×164×140. I would like to align both cubes at the origin and compare them.
The ImagePositionPatient of the first file is: [-207.4748, -151.3715, -198.7500]
The ImagePositionPatient of the second file is: [-207.4500, -156.3500, -198.7500]
Both files have the same ImageOrientationPatient - [1, 0, 0, 0, 1, 0]
Any chance someone could please show me an example? I am not sure how to map from the physical plane back to the image plane.
Thanks a lot in advance,
Ash
===============================================================
Added: 23/2/17
I have used the matrix formula from the link, where in my case:
IPP (Sxyz) of cube1 = [-207.4748, -151.3715, -198.7500]
Xxyz (IOP) = [1, 0, 0]
Yxyz (IOP) = [0, 1, 0]
delta_i = 2.5
delta_j = 2.5
So for values of i = 0:103 and j = 0:162 of cube1, I should compute the values of Pxyz?
What is the next step? Sorry, I do not see how this will help me align the two cubes, which have different IPPs, in the image plane.
Sorry for the newbie question ...
I did not verify the matrix you built. But if it is calculated correctly, you can transform from the volume coordinate system (VCS) (x1, y1, z1), where x1 = column, y1 = row and z1 = slice number, to the patient coordinate system (PCS) (x2, y2, z2) - these coordinates define the point within the patient in millimeters.
By inverting the matrix, you can transform back from PCS to VCS.
Let's say the transformation matrix (VCS -> PCS) for volume 1 is M1 and the one for volume 2 is M2. Then you can transform a point p1 from volume 1 to the corresponding point p2 in volume 2 by transforming it to the PCS using M1 and transforming from the PCS into volume 2 using M2' (the inverse of M2).
By multiplying M2' and M1, you can calculate a matrix that transforms directly from volume 1 to volume 2.
So:
p2 = (M2' * M1) * p1
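As a concrete illustration, here is a minimal NumPy sketch of building these matrices from ImagePositionPatient, ImageOrientationPatient and the voxel spacing. The 2.5 mm in-plane spacing is carried over from the delta_i/delta_j values in the question; the 2.5 mm slice spacing is an assumption, so check your actual slice spacing:

import numpy as np

def vcs_to_pcs(ipp, iop, spacing):
    """4x4 matrix mapping (column, row, slice, 1) to patient coordinates in mm."""
    x = np.array(iop[:3], dtype=float)  # direction of increasing column index
    y = np.array(iop[3:], dtype=float)  # direction of increasing row index
    z = np.cross(x, y)                  # slice direction (assumes a regular stack)
    m = np.identity(4)
    m[:3, 0] = x * spacing[0]
    m[:3, 1] = y * spacing[1]
    m[:3, 2] = z * spacing[2]
    m[:3, 3] = ipp
    return m

# Values from the question
M1 = vcs_to_pcs([-207.4748, -151.3715, -198.7500], [1, 0, 0, 0, 1, 0], (2.5, 2.5, 2.5))
M2 = vcs_to_pcs([-207.4500, -156.3500, -198.7500], [1, 0, 0, 0, 1, 0], (2.5, 2.5, 2.5))

# Voxel (i, j, k) of cube1 -> corresponding (fractional) voxel of cube2
p1 = np.array([0, 0, 0, 1.0])
p2 = np.linalg.inv(M2) @ (M1 @ p1)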

How do I create a dataset with multiple images the same format as CIFAR10?

I have 1750×1750 images that I would like to label and put into a file in the same format as CIFAR10. I have seen an answer to a similar question before:
import numpy as np
from PIL import Image

label = [3]
im = Image.open(img)  # img: path to the image file
im = np.array(im)
print(im)
r = im[:, :, 0].flatten()
g = im[:, :, 1].flatten()
b = im[:, :, 2].flatten()
# One CIFAR-10-style record: label byte followed by the R, G, B planes
array = np.array(list(label) + list(r) + list(g) + list(b), np.uint8)
array.tofile("info.bin")
but it doesn't include how to add multiple images in a single file. I have looked at CIFAR10 and tried to append the arrays in the same way, but all I got was the following error:
E tensorflow/core/client/tensor_c_api.cc:485] Read less bytes than requested
Note that I am using Tensorflow to do my computations, and I have been able to isolate the problem from the data.
The CIFAR-10 binary format represents each example as a fixed-length record with the following format:
1-byte label.
1 byte per pixel for the red channel of the image.
1 byte per pixel for the green channel of the image.
1 byte per pixel for the blue channel of the image.
Assuming you have a list of image filenames called images, and a list of integers (less than 256) called labels corresponding to their labels, the following code would write a single file containing these images in CIFAR-10 format:
import numpy as np
from PIL import Image

with open(output_filename, "wb") as f:
    for label, img in zip(labels, images):
        label = np.array(label, dtype=np.uint8)
        f.write(label.tobytes())        # Write the 1-byte label.
        im = np.array(Image.open(img), dtype=np.uint8)
        f.write(im[:, :, 0].tobytes())  # Write red channel.
        f.write(im[:, :, 1].tobytes())  # Write green channel.
        f.write(im[:, :, 2].tobytes())  # Write blue channel.
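The "Read less bytes than requested" error usually means the reader's expected record size does not match the file contents: each record here is 1 + 3*height*width bytes, so the reading side must be configured for your 1750×1750 images rather than CIFAR-10's 32×32. A quick sanity check, assuming the file was written as above:

import numpy as np

height, width = 1750, 1750
record_bytes = 1 + 3 * height * width  # label byte + three channel planes

with open(output_filename, "rb") as f:
    record = np.frombuffer(f.read(record_bytes), dtype=np.uint8)
label = record[0]
image = record[1:].reshape(3, height, width).transpose(1, 2, 0)  # back to HWC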

blockproc in Matlab with two output variables

I have the following problem. I have to compute dense SIFT interest points in a very large image (182 MP). When I run the code on the full image, Matlab always closes suddenly. So I decided to run the code on image patches.
The code
I tried to use blockproc in Matlab to call the C++ function that performs the dense SIFT interest point detection this way:
fun = @(block_struct) denseSIFT(block_struct.data, options);
[dsift, infodsift] = blockproc(ndvi, [1000 1000], fun);
where dsift contains the SIFT descriptors (vectors) and infodsift the information about the interest points, such as the x and y coordinates.
The problem
The problem is that blockproc only allows one output, but I want both outputs. Matlab gives the following error when I run the code:
Error using blockproc
Too many output arguments.
Is there a way to do this?
Would it be a problem for you to "hard code" a version of blockproc?
Assuming for a moment that you can divide your image into NxM smaller images, you could loop around as follows:
bigImage = someFunction();
sz = size(bigImage);
smallSize = sz ./ [N M];
dsift = cell(N, M);
infodsift = cell(N, M);
for ii = 1:N
    for jj = 1:M
        % Extract the (ii,jj) tile and run denseSIFT on it
        smallImage = bigImage((ii-1)*smallSize(1) + (1:smallSize(1)), (jj-1)*smallSize(2) + (1:smallSize(2)));
        [dsift{ii,jj}, infodsift{ii,jj}] = denseSIFT(smallImage, options);
    end
end
The results will then be in the two cell arrays. There is no real need to pre-allocate, but it's tidier if you do. If the individual matrices are the same size, you can convert them into a single large matrix with
dsiftFull = cell2mat(dsift);
Almost magic. This won't work if your matrices are different sizes - but then, if they are, I'm not sure you would even want to put them all into a single one (unless you decide to horzcat them).
If you do decide you want a list of "all the columns as a giant matrix", then you can do
giantMatrix = [dsift{:}];
This will return a matrix with (in your example) 128 rows, and as many columns as there were "interest points" found. It's shorthand for
giantMatrix = [dsift{1,1} dsift{2,1} dsift{3,1} ... dsift{N,M}];
