I have axial slices of chest CT scans. Now I want to use all these slices to construct a planar, X-ray-like image with a coronal view (posterior-to-anterior or anterior-to-posterior).
I have no idea how to proceed with this problem. One idea is a weighted average of the coronal slices, with more weight given to the frontal slices for an AP view.
Please share your ideas on how to proceed with this problem. Thanks in advance.
You don't even have to do any weighting. If you want something more advanced, you could go with the Siddon-Jacobs algorithm (see discussion) or something from the Reconstruction Toolkit (RTK).
You can use the MeanProjectionImageFilter in SimpleITK.
I used 3D Slicer to download a sample chest CT and applied the filter using the Simple Filters module.
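If you prefer code over the Slicer GUI, here is a minimal Python sketch with SimpleITK (the file name is a placeholder, and the projection axis is an assumption that depends on your volume's orientation):

```python
import SimpleITK as sitk

# Load the CT volume (placeholder file name).
volume = sitk.ReadImage("chest_ct.nii.gz")

# Mean projection along the assumed anteroposterior axis (axis 1 for a
# typical LPS-oriented volume; adjust for your data).
projection = sitk.MeanProjection(volume, 1)

# The result keeps a singleton dimension; index it away to get a 2-D image.
coronal = projection[:, 0, :]

# Rescale to 8-bit so the projection can be saved as an ordinary image.
sitk.WriteImage(sitk.Cast(sitk.RescaleIntensity(coronal), sitk.sitkUInt8),
                "coronal_projection.png")
```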
If you want to create a 2-D X-ray-like image from a 3-D CT volume, you can use ITK's MaximumProjectionImageFilter.
Here's the documentation for the filter:
https://itk.org/Doxygen/html/classitk_1_1MaximumProjectionImageFilter.html
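The same filter is also exposed through SimpleITK, so a maximum intensity projection takes only a few lines (same caveats as above about the file name and axis):

```python
import SimpleITK as sitk

volume = sitk.ReadImage("chest_ct.nii.gz")  # placeholder file name
# Maximum intensity projection along the assumed anteroposterior axis,
# then drop the singleton dimension.
mip = sitk.MaximumProjection(volume, 1)[:, 0, :]
sitk.WriteImage(sitk.Cast(sitk.RescaleIntensity(mip), sitk.sitkUInt8), "mip.png")
```

A maximum projection emphasizes bone and other dense structures, while the mean projection above looks more like a conventional radiograph.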
First of all, English is not my native language, so apologies in advance for grammar mistakes. I am trying to implement the SIFT feature extraction algorithm, and I have a couple of questions about points that are not very clear in the paper:
What happens at octave boundaries when we are searching for local maxima? Do we search just the 8+9 neighborhood (instead of the full 8+9+9), or create extra layers that we don't use in the other steps?
When interpolating the extrema with a second-order function, do we upscale the downscaled images directly from the DoG, or interpolate on the original downsampled image and then upscale to get the sub-pixel-accurate positions?
When interpolating the extrema with a second-order function, does it stay within the same octave, or are other octaves used as well? I think the other octaves would have to be upsampled before interpolating.
Should the image size stay the same after convolving with the Gaussian? This will affect the keypoint locations.
Vedaldi provided a great implementation: http://vision.ucla.edu/~vedaldi/code/sift.html . Because it is in .mex format, I can't see what is going on inside. Other open-source implementations haven't satisfied me either, hence I am asking for your help.
Thank you so much for your valuable answers.
I'm new to the texture recognition field, and I would like to know the possible ways to approach a texture problem in OpenCV.
I need to identify the texture within a region of the picture and tell whether it is uniform and homogeneous across the whole area or not.
More specifically, I need to be able to tell whether a possible fallen person is actually a person (with many different kinds of textures) or a false positive such as a pillow or a blanket.
Could anyone suggest a solution, please?
Is there some ready-made OpenCV code I could adapt?
Thanks in advance!
Why not use Haralick features? They are also called texture features. The basic idea is to compute a co-occurrence matrix from a given grayscale image, from which the Haralick features are then computed. You can pick between different features like contrast, correlation, entropy, etc., which can describe your texture. For the same texture, a given feature should have the same (or a similar) value, so that might be a way to distinguish textures.
Here are some links that may be helpful:
Co-occurrence matrix tutorial
Haralick features summary
Co-occurrence matrix in scikit-image
As far as I know, there is no implementation of Haralick features in OpenCV, but you can use Python with scikit-image (and of course you can use OpenCV from Python too, if you don't mind using something other than C++).
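A rough sketch with scikit-image's graycomatrix/graycoprops (the random patch is just a stand-in for your region of interest):

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

# Placeholder texture patch; replace with your grayscale region.
patch = np.random.randint(0, 256, (64, 64), dtype=np.uint8)

# Co-occurrence matrix for a 1-pixel offset in four directions.
glcm = graycomatrix(patch, distances=[1],
                    angles=[0, np.pi / 4, np.pi / 2, 3 * np.pi / 4],
                    levels=256, symmetric=True, normed=True)

# A few Haralick-style features, averaged over the four directions.
features = {prop: graycoprops(glcm, prop).mean()
            for prop in ("contrast", "correlation", "energy", "homogeneity")}
print(features)
```

Comparing these feature values between regions should indicate whether they share the same texture.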
I am trying to identify which parts of a picture are in focus and which are blurred, as in the example images.
But how do I do that? Any ideas on how to measure this? I've read something about finding the high frequencies, but how could that produce a picture like those?
Cheers,
Any image will be sharpest at its optimum focus. Take advantage of that: run the Sobel operator or the Laplace operator, or any kind of difference (derivative) filter. Sum the results pixel by pixel; the image with the highest sum is the best-focused one.
Edit:
There will be additional constraints depending on how much additional information you have, e.g. multiple samples, similarity of objects in the image, etc.
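A minimal OpenCV sketch of this idea, scoring sharpness with the variance of the Laplacian both globally and per tile to get a coarse focus map (the tile size and file name are assumptions):

```python
import cv2
import numpy as np

def focus_measure(gray):
    # Variance of the Laplacian: more high-frequency content means a
    # higher value, i.e. a sharper region.
    return cv2.Laplacian(gray, cv2.CV_64F).var()

def focus_map(gray, tile=32):
    # Score each tile separately to localize the in-focus regions.
    rows, cols = gray.shape[0] // tile, gray.shape[1] // tile
    fmap = np.zeros((rows, cols))
    for i in range(rows):
        for j in range(cols):
            patch = gray[i * tile:(i + 1) * tile, j * tile:(j + 1) * tile]
            fmap[i, j] = focus_measure(patch)
    return fmap

img = cv2.imread("photo.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path
print("global focus score:", focus_measure(img))
print(focus_map(img))
```

Thresholding the map then marks the blurred versus sharp regions.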
Check out this paper for more precision than the Laplace filter gives. In my problem with 4K images, the Laplace filter was insufficient for detecting blurred and out-of-focus regions.
https://github.com/facebookresearch/DeepFocus
Edit: there are a number of deep-learning approaches to blur detection; choose the method that best suits your needs. :)
I am looking for an efficient way to detect the small boxes around the numbers (see images).
I have already tried the Hough transform, with no success. Any ideas? I need some hints! I am using OpenCV...
For inspiration, you can have a look at the MATLAB video Sudoku solver demo and explanation, or at Sudoku Grab, an iPhone app whose author explains the computer vision part on his blog.
Alternatively, if you are always hunting for the same grid, you could deploy something like this:
Make a perfect artificial template of the grid and detect or save the coordinates of all the corners.
Do the same thing in the target image, for example with Harris corner points. Be creative; you might also be able to use the distinct triangles that can be found in your images.
Using the coordinates from the template and the detected Harris points, determine the affine transformation x = Ax' between the template and the target image. That transformation can then be used to map the template grid onto the target image. At the very least, this will give you some prior information to help guide further segmentation.
The gist of the idea, and examples of estimating the affine matrix A, can be found on the website for Zisserman's book Multiple View Geometry in Computer Vision and on Peter Kovesi's site.
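A hedged OpenCV sketch of the estimation step (the point coordinates below are made up; in practice they come from your template and from the Harris detector):

```python
import cv2
import numpy as np

# Hypothetical corresponding corners: template_pts from the artificial
# grid template, target_pts from Harris corners found in the photograph.
template_pts = np.float32([[0, 0], [100, 0], [0, 100], [100, 100]])
target_pts = np.float32([[12, 8], [110, 14], [9, 105], [108, 112]])

# Robustly estimate the 2x3 affine matrix A; RANSAC tolerates a few
# mismatched correspondences.
A, inliers = cv2.estimateAffine2D(template_pts, target_pts, method=cv2.RANSAC)

# Map the template grid corners onto the target image.
mapped = cv2.transform(template_pts.reshape(-1, 1, 2), A)
print(mapped.reshape(-1, 2))
```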
I'd start by trying to detect the rectangular boundary of the overall sheet, then apply a perspective transform to make it truly rectangular. Crop that portion of the image out. If possible, then try to give the alternating white and grey sub-rectangles an equal background brightness - maybe try adaptive histogram equalization.
Then the Hough transform might perform better. Alternatively, you could then take an approach that's broadly similar to this demonstration by Robert Bemis on MATLAB Central (it's analysing a DNA microarray image rather than Lotto cards, but it's essentially finding bounding boxes of items arranged in a grid). At a high level, the approach is to calculate the autocorrelation along columns and rows of pixels to detect the periodicity of the items in the grid, and use that to impose a bounding box on each item.
Sorry the above advice is mostly MATLAB-based; I'm afraid I'm not an OpenCV user, but hopefully it will give you some ideas at least.
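For the rectification step, here is a rough OpenCV sketch (the thresholding recipe, the output size, and the assumption that the sheet is the largest contour are all guesses to adapt):

```python
import cv2
import numpy as np

img = cv2.imread("sheet.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder path

# Assume the sheet boundary is the largest contour after Otsu thresholding.
_, th = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
contours, _ = cv2.findContours(th, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
sheet = max(contours, key=cv2.contourArea)
corners = cv2.approxPolyDP(sheet, 0.02 * cv2.arcLength(sheet, True), True)

if len(corners) == 4:
    # Note: the four corners must first be sorted into a consistent order
    # (e.g. top-left, top-right, bottom-right, bottom-left).
    src = corners.reshape(4, 2).astype(np.float32)
    dst = np.float32([[0, 0], [800, 0], [800, 600], [0, 600]])  # arbitrary size
    warped = cv2.warpPerspective(img, cv2.getPerspectiveTransform(src, dst),
                                 (800, 600))
    # Flatten the alternating backgrounds before re-running Hough.
    flat = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(warped)
```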
What are the ways to quantify the texture of a portion of an image? I'm trying to detect areas in an image that are similar in texture, essentially a measure of "how similar are they?"
So the question is: what information about the image (edges, pixel values, gradients, etc.) can be taken as carrying its texture information?
Please note that this is not based on template matching.
Wikipedia doesn't give much detail on actually implementing any of the texture analyses.
Do you want to find two distinct areas in the image that look the same (same texture), or match a texture in one image to a texture in another?
The second is harder because of differing radiometry.
Here is a basic scheme for measuring the similarity of areas:
1. Write a function that takes an area of the image as input and calculates a scalar value, such as average brightness. This scalar is called a feature.
2. Write more such functions to obtain about 8-30 features, which together form a vector encoding information about the area of the image.
3. Calculate such a vector for both areas that you want to compare.
4. Define a similarity function that takes two vectors and outputs how alike they are.
You need to focus on steps 2 and 4.
Step 2: Use features such as the standard deviation of the brightness, some kind of corner-detector response, an entropy filter, a histogram of edge orientations, and a histogram of FFT frequencies (in the x and y directions). Use color information if available.
Step 4: You can use cosine similarity, min-max, or a weighted cosine.
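A minimal sketch of steps 1-4 with OpenCV and NumPy (the particular features and the file names are illustrative assumptions, not prescriptions):

```python
import cv2
import numpy as np

def area_features(gray):
    # A handful of illustrative scalar features for a grayscale region.
    lap = cv2.Laplacian(gray, cv2.CV_64F)           # edge/detail strength
    hist = cv2.calcHist([gray], [0], None, [16], [0, 256]).ravel()
    hist /= hist.sum() + 1e-9
    entropy = -np.sum(hist * np.log2(hist + 1e-9))  # brightness entropy
    return np.array([gray.mean(), gray.std(), np.abs(lap).mean(), entropy])

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

# Compare two regions (placeholder files standing in for your patches).
patch_a = cv2.imread("region_a.png", cv2.IMREAD_GRAYSCALE)
patch_b = cv2.imread("region_b.png", cv2.IMREAD_GRAYSCALE)
print(cosine_similarity(area_features(patch_a), area_features(patch_b)))
```

In practice, normalize each feature to a comparable range first, otherwise the similarity is dominated by whichever feature has the largest magnitude.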
After you implement about 4-6 such features and a similarity function, start running tests. Look at the results and try to understand why or where the comparison fails, then add a specific feature to cover that case.
For example, if you see that a texture with big blobs is regarded as similar to a texture with tiny blobs, add a morphological filter that computes the density of objects larger than 20 square pixels.
Iterate this process of identifying a problem and designing a specific feature about 5 times, and you will start to get very good results.
I'd suggest using wavelet analysis. Wavelets are localized in both time and frequency and, through multiresolution analysis, give a better signal representation than the Fourier transform does.
There is a paper explaining a wavelet approach to texture description, and there is also a comparison method.
You might need to slightly modify the algorithm to process images of arbitrary shape.
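A small sketch of the idea using PyWavelets (the library, the db2 wavelet, and the decomposition depth are all assumptions; subband energies are one common wavelet texture descriptor):

```python
import numpy as np
import pywt

# Placeholder texture patch; replace with your grayscale region.
patch = np.random.rand(64, 64)

# Two-level 2-D wavelet decomposition.
coeffs = pywt.wavedec2(patch, wavelet="db2", level=2)

# Mean energy of each detail subband (cH, cV, cD per level) gives a
# simple multiresolution texture descriptor.
features = [np.mean(np.square(band))
            for level_bands in coeffs[1:] for band in level_bands]
print(features)
```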
An interesting approach for this is to use Local Binary Patterns (LBP).
Here is a basic example with some explanations: http://hanzratech.in/2015/05/30/local-binary-patterns.html
See that method as one of many different ways to get features from your pictures; it corresponds to step 2 of DanielHsH's method.
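A compact scikit-image sketch (the neighborhood parameters are a common default, not a requirement; the normalized histogram of LBP codes is the descriptor you compare between regions):

```python
import numpy as np
from skimage.feature import local_binary_pattern

# Placeholder texture patch; replace with your grayscale region.
patch = np.random.randint(0, 256, (64, 64)).astype(np.uint8)

P, R = 8, 1  # 8 neighbors at radius 1: a common starting point
lbp = local_binary_pattern(patch, P, R, method="uniform")

# "uniform" LBP yields P + 2 distinct codes; histogram them.
hist, _ = np.histogram(lbp, bins=np.arange(P + 3), density=True)
print(hist)
```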