DICOM Registration - IPP and PS - alignment

I have two DICOM files with the following Image Position (Patient) (IPP) and Pixel Spacing (PS):
img1 has a PS of 2 mm and an IPP of (-256, -256, -128)
img2 has a PS of 2.5 mm and an IPP of (-206, -201, -128)
For image registration / alignment, I understand I need to bring both images
to the same PS and IPP.
My first step is to bring img1 to a PS of 2.5 mm, i.e. interpolate
img1 by a factor of 1.25 (2.5/2) to match the spacing of img2.
**Pixel Spacing Calculation:**
2 mm = 1 pixel
2.5 mm = 2.5/2 = 1.25 pixels
Does this mean that the IPP of img1 will also change to (-320, -320, -128),
i.e. scale by 1.25 as well (-256 * 1.25 = -320)?
Thanks a lot in advance
Ash

No. The IPP defines the position of the center of the top-left pixel of the image in patient/world coordinates, so it does not scale with the pixel spacing. You might, however, need to apply the scaling to the position to achieve sub-pixel accuracy.
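For illustration, here is a minimal resampling sketch (not from the original exchange), assuming SimpleITK is available and using a placeholder file name; it changes only the pixel spacing and keeps the origin (IPP) unchanged:

import SimpleITK as sitk

# "img1.dcm" is a placeholder; a real DICOM series would be read with an ImageSeriesReader.
img1 = sitk.ReadImage("img1.dcm")
old_spacing = img1.GetSpacing()

# Change the in-plane spacing from 2 mm to 2.5 mm, keep any remaining axes as they are.
new_spacing = tuple(2.5 if i < 2 else s for i, s in enumerate(old_spacing))

# Fewer pixels are needed to cover the same physical extent at 2.5 mm spacing.
new_size = [int(round(sz * old / new))
            for sz, old, new in zip(img1.GetSize(), old_spacing, new_spacing)]

resampled = sitk.Resample(
    img1,
    new_size,
    sitk.Transform(),       # identity transform
    sitk.sitkLinear,        # linear interpolation
    img1.GetOrigin(),       # the origin (IPP) is kept; it does not scale
    new_spacing,
    img1.GetDirection(),
    0,                      # default pixel value for out-of-bounds samples
    img1.GetPixelID(),
)

print(resampled.GetOrigin())   # still the original IPP, e.g. (-256.0, -256.0, -128.0)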

Related

SIFT detection / FLANN matching results in matches with far too large distances

I'm doing some feature detection/pattern matching based on the OpenCV example code shown at https://docs.opencv.org/3.4/d5/dde/tutorial_feature_description.html
In my example, input2 is an image with a size of 256x256 pixels and input1 is a sub-image of input2 (e.g. with an offset of 5x5 pixels and a size of 200x80 pixels).
Everything works fine: OpenCV detects a larger number of keypoints and descriptors in input2 than in input1, and after matching the two descriptor sets, "matches" contains exactly as many elements as there were incoming descriptors1.
So far everything is logical and fits: the number of matches is exactly the number of keypoints/descriptors expected in the sub-image part.
My problem with this: the elements in "matches" all have far too large distance values! They are all bigger than 5 (my sub-image offset) and most of them are bigger than 256 (the total image size)!
What could be the reason for this?
Thanks!
Update: here you find the image(s) I'm working with:
The whole image is my input2 (don't worry that it is not 256x256 pixels; it was taken from a screenshot that shows more things). The blue, dashed rectangle in the middle shows my input1 and the circles within this rectangle mark the already detected keypoints1.
The behavior of the code appears to differ between versions 2.x and 3.x.
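As a point of reference, here is a minimal matching sketch in the spirit of the linked tutorial (the file names are placeholders, and it assumes an OpenCV build where cv2.SIFT_create is available). Note that DMatch.distance is the distance between the two descriptor vectors (the L2 norm for SIFT), not a pixel offset in the image, so values far above the image size are not unusual:

import cv2

img1 = cv2.imread("input1.png", cv2.IMREAD_GRAYSCALE)   # the sub-image
img2 = cv2.imread("input2.png", cv2.IMREAD_GRAYSCALE)   # the full image

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Brute-force matching with the L2 norm: one match per descriptor in des1
matcher = cv2.BFMatcher(cv2.NORM_L2)
matches = matcher.match(des1, des2)

for m in matches[:5]:
    # m.distance is descriptor dissimilarity; the pixel positions sit on the keypoints
    print(m.distance, kp1[m.queryIdx].pt, kp2[m.trainIdx].pt)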

Finding cropped and scaled similar image

Given several large original images and a small image which is cropped and isotropically scaled from one of the large images, the task is to find where the small image comes from.
- Cropping usually occurs at the center of the large image, but the exact crop boundary is unknown.
- The size of the small image is about 200x200, but again, its exact size is unknown.
- If the size of the cropped area is (width, height), the size of the small image is (width * k, height * k), where k < 1.0.
I've read some related topics on SO and tried methods like ORB and color histograms, but the accuracy is not acceptable. Could you give me some advice? Is there an efficient algorithm for this problem? Thank you very much.
The term you are looking for is template matching: you want to scan the original image and look for the origin of the cropped and scaled one.
The OpenCV tutorial has an extensive explanation of it.
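As an illustration of that suggestion, here is a rough multi-scale template matching sketch (the file names and the scale range are placeholder assumptions); since the crop is isotropically scaled by an unknown k < 1, the small image is tried at several upscaling factors:

import cv2
import numpy as np

large = cv2.imread("large.png", cv2.IMREAD_GRAYSCALE)
small = cv2.imread("small.png", cv2.IMREAD_GRAYSCALE)

best_score, best_loc, best_scale = -1.0, None, None
for scale in np.linspace(1.0, 3.0, 21):          # candidate values for 1/k
    resized = cv2.resize(small, None, fx=scale, fy=scale,
                         interpolation=cv2.INTER_LINEAR)
    if resized.shape[0] > large.shape[0] or resized.shape[1] > large.shape[1]:
        break                                    # the template may not exceed the image
    result = cv2.matchTemplate(large, resized, cv2.TM_CCOEFF_NORMED)
    _, max_val, _, max_loc = cv2.minMaxLoc(result)
    if max_val > best_score:
        best_score, best_loc, best_scale = max_val, max_loc, scale

print("best score %.3f at %s with scale %.2f" % (best_score, best_loc, best_scale))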

OpenCV image stitching, 160 degree wide angle

I'm trying to stitch images from a 160.5 degree wide-angle camera, but the result is not good.
I'm using OpenCV 4 and ffmpeg to get frames from the video.
ffmpeg command to get 15 frames per second:
ffmpeg -i first.mp4 -vf fps=15 preview%05d.jpg
OpenCV stitching code:
import cv2
import numpy as np

images = []
for i in range(70):
    name = ('preview%05d.jpg' % (i + 1))
    print(name)
    images.append(cv2.imread(name, cv2.IMREAD_COLOR))

print("start")
stitcher = cv2.Stitcher_create()
ret, pano = stitcher.stitch(images)
if ret == cv2.STITCHER_OK:
    cv2.imshow('panorama', pano)
    cv2.waitKey()
    cv2.destroyAllWindows()
else:
    print(cv2.STITCHER_ERR_NEED_MORE_IMGS)
    print(cv2.STITCHER_ERR_HOMOGRAPHY_EST_FAIL)
    print(cv2.STITCHER_ERR_CAMERA_PARAMS_ADJUST_FAIL)
    print(ret)
    print('Error during stitching')
Actual result:
Expected result:
Before the code line stitcher = cv2.Stitcher_create() you have to add some preprocessing that transforms your trapezoidal image view into a rectangular image view via the homography method.
use: cv2.findHomography(srcPoints, dstPoints[, method[, ransacReprojThreshold[, mask]]])
srcPoints – Coordinates of the points in the original plane, a matrix of the type CV_32FC2 or a vector<Point2f>.
dstPoints – Coordinates of the points in the target plane, a matrix of the type CV_32FC2 or a vector<Point2f>.
See also here for findHomography at OpenCV.
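A small, self-contained sketch of that call (the corner coordinates and file name below are invented placeholders, not values from the question):

import cv2
import numpy as np

# Four (or more) correspondences: trapezoid corners mapped to rectangle corners
src_pts = np.float32([[100, 50], [1820, 50], [1900, 1060], [20, 1060]])
dst_pts = np.float32([[0, 0], [1920, 0], [1920, 1080], [0, 1080]])

H, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

img = cv2.imread('preview00001.jpg')           # one frame of the sequence above
rectified = cv2.warpPerspective(img, H, (1920, 1080))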
In particular: in your case the base (bottom side of the image) shows the most information, whereas the top side contains more irrelevant information. Here you should keep the width of the top side the same and narrow the bottom. This should be done for every image. Once done you can try stitching them again.
An example of the approach, transforming trapezium-shaped image information into, e.g., a square image:
(information ratio x)
----+++++++---- (1)
---+++++++++--- (1)
--+++++++++++-- (1)
-+++++++++++++- (1)
+++++++++++++++ (1)
into Squared image information:
(information ratio x)
----+++++++---- (1)
----+++++++---- (1.1)
----+++++++---- (1.2)
----+++++++---- (1.3)
----+++++++---- (1.4; most compressed information ratio)
Once this is done you can stitch it. Don't forget to post the result ;-)
Another approach is to treat the camera as a line scanner. With this method you take the information from each image for, let's say, lines y = 1060 to 1080 (e.g. for an image size of 1920x1080 px) and then fill a new array with the information from those 20 lines in ascending order.
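A rough sketch of that line-scanner idea, assuming 1920x1080 px frames and the preview%05d.jpg naming used above:

import cv2
import numpy as np

strips = []
for i in range(70):
    frame = cv2.imread('preview%05d.jpg' % (i + 1), cv2.IMREAD_COLOR)
    strips.append(frame[1060:1080, :, :])    # the 20 lines y = 1060..1079

line_scan = np.vstack(strips)                # stack the strips in ascending order
cv2.imwrite('line_scan.jpg', line_scan)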
Update Jan 2019:
As homography appears not to do 100% of the job due to the steep 60 degree angle, you can try to correct the angle by applying a perspective transform first.
import cv2
import numpy as np

# You can add a for-loop + image counter here to perform the action on all frames taken
# from the movie file; then it is easy to substitute numbers into the image name below.
# The corner points are the example values from the original comments.
scr_array = np.float32([[56, 65], [368, 52], [28, 387], [389, 390]])    # source points
dest_array = np.float32([[0, 0], [300, 0], [0, 300], [300, 300]])       # destination points

image = cv2.imread('preview00001.jpg')       # example frame from the sequence above
rows, cols = image.shape[:2]

Matrix1 = cv2.getPerspectiveTransform(scr_array, dest_array)
dst = cv2.warpPerspective(image, Matrix1, (cols, rows))

label = 'CarImage1'   # use ('CarImage%s' % labelnr) here for automated annotation
# cv2.imshow(label, dst)              # check the image
# cv2.imwrite('%s.jpg' % label, dst)
See also the docs here on PerspectiveTransform.
The Stitcher expects images that have similar parts (up to some perspective transformation). It performs pairwise image registration to find this perspective transform. In your case it won't be able to find it because it simply does not exist.
An additional step that you must perform prior to the stitcher is to rectify each image to correct the wide-angle distortion. To find the rectification parameters you will need to do a camera calibration with calibration targets.
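As a sketch of that extra step (not from the original answer): assuming a fisheye calibration has already produced an intrinsic matrix K and distortion coefficients D (the numbers below are placeholders), each frame could be rectified before stitching roughly like this:

import cv2
import numpy as np

# Placeholder calibration results; obtain real values with cv2.fisheye.calibrate()
# on chessboard images taken with the same camera.
K = np.array([[800.0,   0.0, 960.0],
              [  0.0, 800.0, 540.0],
              [  0.0,   0.0,   1.0]])
D = np.array([[-0.05], [0.01], [0.0], [0.0]])      # fisheye model: k1..k4

img = cv2.imread('preview00001.jpg')
h, w = img.shape[:2]

# Build the undistortion maps once, then remap every frame before feeding the stitcher.
new_K = cv2.fisheye.estimateNewCameraMatrixForUndistortRectify(K, D, (w, h), np.eye(3))
map1, map2 = cv2.fisheye.initUndistortRectifyMap(K, D, np.eye(3), new_K, (w, h), cv2.CV_16SC2)
rectified = cv2.remap(img, map1, map2, interpolation=cv2.INTER_LINEAR)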

Visualizing RGB bands of RGBN image

I have an RGBN-band .tif satellite image from PlanetScope which I would like to preprocess for a neural network. When I view the image in QGIS I get a nice RGB image, but when I import it as a numpy array the image is very light. Some information on the image:
Type of the image : <class 'numpy.ndarray'>
Shape of the image : (7327, 7327, 5)
Image Height 7327
Image Width 7327
Image Shape (7327, 7327, 5)
Dimension of Image 3
Image size 268424645
Maximum RGB value in this image 65535
Minimum RGB value in this image 1
The image is of type uint16. The last band (pic[:,:,4]) only contains a single value (65535) everywhere. Hence, I think this band should be removed, leaving the RGBN bands, whose information is as follows:
Type of the image : <class 'numpy.ndarray'>
Shape of the image : (7327, 7327, 4)
Image Height 7327
Image Width 7327
Image Shape (7327, 7327, 4)
Dimension of Image 3
Image size 214739716
Maximum RGB value in this image 19382
Minimum RGB value in this image 1
The maximum value (19382) of the RGBN image seems pretty low given that the range of uint16 images is 0-65535. Consequently, skimage.io.imshow(image) shows a nearly white image. I do not understand why QGIS is able to show the image properly in real color but Python does not.
The image is loaded with pic = skimage.io.imread("planetscope_20180502_43.tif").
I have tried scaling the image with img_scaled = pic / pic.max() and converting it to uint8 with img_as_ubyte(pic) before viewing, without success. I view the image with skimage.io.imshow(pic).
If necessary, the image can be downloaded here. I include it because it somehow seems impossible to read the image with certain packages (tifffile, for example, does not work on this tif file).
The max values of the RGB channels are lower than that of the N channel:
>>> pic.max(axis=(0,1))
array([10300, 7776, 11530, 19382, 65535], dtype=uint16)
But look at the mean values of the RGB channels: they are much smaller than max/2:
>>> pic.mean(axis=(0,1))
array([ 439.14001492, 593.17588875, 542.4638124 , 3604.6826063 ,
65535. ])
You have a high dynamic range (HDR) image here and want to compress its high range to 8 bits for display. A linear scaling with the maximum value won't do, as the highest peaks are an order of magnitude higher than the average image values. Plotting the histogram of the RGB values:
If you do a linear scaling with some factor that's a bit above the mean, and simply clip the rest of the (now overexposed) values, you can display the image and see that you have valid data:
rgb = pic[..., :3].astype(np.float32) / 2000
rgb = np.clip(rgb, 0.0, 1.0)
But to get a proper image, you will need to look into what the camera response of your data is, and how these HDR images are usually compressed into 8 bits for displaying (I'm not familiar with satellite imaging).
Thank you w-m, I was able to build on that and figure it out. Since w-m already did a neat job of elaborating on the problem, I will just leave the code here that I wrote to resolve the issue:
import numpy as np
import skimage

# Work in float so the per-band stretch is not truncated by the uint16 dtype.
image = image.astype(np.float64)

# Contrast-stretch each band between its 2nd and 98th percentile and rescale to [0, 1];
# np.interp also clips values outside the percentile range.
for i in range(0, 4):
    min_ = np.percentile(image[:, :, i], 2)
    max_ = np.percentile(image[:, :, i], 98)
    image[:, :, i] = np.interp(image[:, :, i], (min_, max_), (0, 1))

image_8bit_scaled = skimage.img_as_ubyte(image)
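For a quick visual check of the stretch, the result could then be displayed with the same skimage.io viewer used in the question (a hypothetical usage example):

import skimage.io

skimage.io.imshow(image_8bit_scaled[:, :, :3])   # show just the R, G, B bands
skimage.io.show()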

Compare histograms of specific areas of two images? OpenCV

Basically, I want to be able to compare two histograms, but not of whole images, just of specific areas. I have image A and a specific rectangular region on it that I want to compare to another image B. Is there a way to get the histogram of a definable rectangular region of an image? I have the x, y position of the rectangular area, as well as its width and height, and want to get its histogram. I'm using OpenCV with Python.
Sorry if that isn't very clear :(
(I'm setting up a program that takes a picture of a circuit board and checks each solder pad for consistency against an image of a perfect board. If one pad is off, the program raises a flag saying that specific pad is off by x percent, rather than flagging the whole board.)
Note: The following is in C++, but I think it is not hard to find the equivalent functions for Python.
You can find the histogram of an image using this tutorial. So for example for the lena image we get:
In your case, since you have the rectangle coordinates, you can just extract the ROI of the image:
// C++ code
cv::Mat image = cv::imread("lena.png", 0);
cv::Rect roiRect = cv::Rect(150, 150, 250, 250);
cv::Mat imageRoi = image(roiRect);
and then find the histogram of just the ROI in the same way as above:
Is this what you wanted (in theory at least), or did I misunderstand?
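For a Python equivalent of the C++ snippet above (the file name and ROI values are just placeholders), the ROI can be sliced directly from the numpy array and passed to cv2.calcHist:

import cv2

image = cv2.imread("lena.png", cv2.IMREAD_GRAYSCALE)
x, y, w, h = 150, 150, 250, 250              # rectangular region of interest
roi = image[y:y + h, x:x + w]

# 256-bin grayscale histogram of just the ROI
hist = cv2.calcHist([roi], [0], None, [256], [0, 256])

# two ROI histograms can then be compared, e.g. with correlation:
# similarity = cv2.compareHist(hist_a, hist_b, cv2.HISTCMP_CORREL)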

Resources