I'm looking for local and global descriptors for medical image processing. I know about SIFT/SURF/GLOH/HOG, which are mainly applied to general computer vision problems, but I would like to know whether they are also used to describe features in medical images, or whether there are descriptors specific to this field.
I would really appreciate any hint.
Thanks in advance,
Federico
If you want to use standard SIFT for multimodal matching, you have to adjust it a little bit - make it invariant to intensity inversion.
There is a good paper about this by Kelman et al., "Keypoint Descriptors for Matching Across Multiple Image Modalities and Non-linear Intensity Variations".
There are also more specialized descriptors for multimodal matching; see "An efficient approach for robust multimodal retinal image registration based on UR-SIFT features and PIIFD descriptors" by Ghassabi et al.
I assumed you need the descriptors for matching.
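As a rough, hedged illustration of what "invariant to intensity inversion" can mean in practice, here is a small toy sketch of mine (not code from the paper): gradient orientations are folded into [0, 180) degrees, so a gradient and its contrast-inverted counterpart land in the same histogram bin, which is the same spirit as the gradient-mirroring idea applied inside SIFT-like descriptors.

import numpy as np
import cv2

def folded_orientation_histogram(patch, bins=8):
    # patch: a grayscale image patch as a NumPy array.
    gx = cv2.Sobel(patch, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(patch, cv2.CV_32F, 0, 1)
    mag = np.hypot(gx, gy)
    ang = np.rad2deg(np.arctan2(gy, gx)) % 180.0   # fold opposite gradient directions together
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, 180.0), weights=mag)
    return hist / (hist.sum() + 1e-12)             # normalized, inversion-robust histogram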
I personally submitted a poster (which was accepted) on using SIFT as part of the feature detection and matching framework my work required.
The feature detection methods you mentioned are good for general images and will also work as a good initial input for your framework. However, every anatomical region and every modality lives in its own feature domain (e.g. brain regions imaged with MR, liver regions imaged with CT; each probably implies its own distinctive landmarks). It is best to first identify what is unique in or near your target anatomical region, then check whether the aforementioned algorithms actually locate those distinctive features (distinctive enough that they occur in your region and nowhere else), and finally find ways to separate them from the bag of features that gets detected alongside them. The resulting set is the key features/descriptors you would want to keep.
So, yes, many feature detection algorithms have been used extensively in various areas of medical imaging.
I'm looking for a way to detect humans in a picture. For instance, regarding the picture below, I'd like to coarsely determine how many people are in the scene. I must be able to detect both standing and sitting people. I do not mind not detecting people located behind a physical object (such as the glass in the bus picture).
AFAIK, such a problem can rather easily be solved by training deep neural networks. However, my coworkers would like me to also implement a detection technique based on general image processing techniques. I've spent several days looking for techniques designed by researchers, but I couldn't find anything other than saliency-based techniques (which may be fine, but I'd like to test several techniques based on old-fashioned image processing).
I'd like to mention that I'm not new to the topic of image segmentation & I used to segment aortas in medical scans. However, that task was easier IMHO since scanners produce images with fairly similar characteristics; in this use case (human detection in a bus, for instance), the pictures will have very different characteristics (e.g. image contrast can vary strongly depending on whether the picture was taken during the day or at night).
Long story short, I'd like to know whether there is a segmentation or detection technique for humans that would be worth giving a shot, given that the image characteristics vary a lot.
Is deep learning the only way to detect humans in a picture?
No. Is it the best way we know? Depends on your conditions.
The simplest way to do detection is to generate lots of random bounding boxes and then solve a classification problem on each crop. Here is some Pythonic pseudo-code:
def detect_people(image):
    """
    Find all people in an image.

    Parameters
    ----------
    image : image object

    Returns
    -------
    people : list of axis-aligned bounding boxes (aabb)
        Each bounding box contains a person.
    """
    people = []
    # generate_random_aabb, crop_image and is_person are placeholders for
    # your box-proposal, cropping and classification steps.
    for aabb in generate_random_aabb(image):
        crop = crop_image(image, aabb)
        if is_person(crop):
            people.append(aabb)  # keep the box, as promised by the docstring
    return people
In this case is_person can be any classifier, e.g. boosted decision stumps as used in the Viola–Jones object detection framework. Speaking of which: that framework would likely be the way to go without deep learning, but it is considerably more complicated to explain.
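As a concrete but hedged example, is_person could be replaced by OpenCV's pre-trained HOG + linear SVM pedestrian detector; in practice you would then let the detector scan the image at multiple scales itself instead of classifying random crops:

import cv2

# Pre-trained HOG + linear SVM pedestrian detector shipped with OpenCV.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

def detect_people_hog(image):
    # Returns axis-aligned boxes (x, y, w, h), scanning the whole image
    # at multiple scales rather than classifying random crops.
    boxes, weights = hog.detectMultiScale(image, winStride=(8, 8), padding=(8, 8), scale=1.05)
    return list(boxes)

Keep in mind this detector was trained on upright pedestrians, so sitting people (as in your bus example) will often be missed; treat it as a baseline, not a full solution.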
Object Detection vs Segmentation
Your question mixes both. Object detection gives you bounding boxes (coarse) for instances. Semantic segmentation labels all pixels by classes, but does not distinguish different instances of the same class (e.g. different people). Instance segmentation is like object detection, but is fine-grained and aims for pixel-exact results.
If you are interested in segmentation, I can recommend my paper: A Survey of Semantic Segmentation
This question is for those who have tried feature detection/matching methods on brain images - it is a broad one, and perhaps a bad one:
How could you tell if the method you used was "good enough?"
What does a successful matching/detection test look like for your data?
EDIT:
As of now, I am not trying to detect any particular features. I'm using OpenCV's ORB, SIFT, SURF, etc. detection methods and seeing what features they identify. Sometimes, however, the orientation of the brain changes entirely from one set of images to the next, so if I compare two images across these sets, the detection methods won't yield any useful results (i.e. the matching will be distinctly, completely off). But if I compare images that look similar, though not identical, the detection seems to work all right. The point is, detection seems to work for frames that were taken around the same time, but not over a long interval. I wonder if others have come across this and whether they have found detection methods still useful despite it.
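For reference, this is roughly the kind of matching I am describing, as a minimal sketch (file names and thresholds are placeholders); the RANSAC inlier ratio is one crude, quantitative notion of "good enough":

import cv2
import numpy as np

img1 = cv2.imread("brain_a.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
img2 = cv2.imread("brain_b.png", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.75 * n.distance]

if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, mask = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print("inlier ratio:", mask.sum() / len(good))        # crude matching-quality score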
First of all, you should specify what kind of features you need, or for which purpose the experiment is going to be performed.
Feature extraction is highly subjective in nature: it all depends on what type of problem you are trying to handle. There is no generic feature extraction scheme that works in all cases.
For example, if the features are meant for tumor or lesion classification, then of course there are different software packages you can use to extract and define your features.
There are different methods to detect the relevant features, depending on the application:
SURF algorithm (Speeded Up Robust Features)
PLOFS: It is a fast wrapper approach with a subset evaluation.
ICA or PCA (independent or principal component analysis; see the sketch below)
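As a small, hedged illustration of the PCA route with scikit-learn (the data here is a random placeholder; in practice each row would be a flattened image or patch):

import numpy as np
from sklearn.decomposition import PCA

X = np.random.rand(200, 64 * 64)        # placeholder: one flattened 64x64 patch per row

pca = PCA(n_components=20)
features = pca.fit_transform(X)         # compact feature vectors for a downstream classifier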
This paper is a very good review of feature extraction from brain MRI data for tissue classification:
https://pdfs.semanticscholar.org/fabf/a96897dcb59ad9f04b5ff92bd15e1bd159ef.pdf
I found this paper very helpful for understanding the differences between feature extraction techniques:
https://www.sciencedirect.com/science/article/pii/S1877050918301297
I work at an airport where we need to determine the visibility conditions of pilots.
To do this, we have signs placed every 200 meters along the runway that allow us to determine how far the visibility is. We have multiple runways, and the visibility needs to be checked every hour.
Right now the visibility check is done manually by a human who looks at the photos from the cameras placed at the end of each runway, so it can be tedious.
I'm a programmer who has very little experience with machine learning, but this sounds like an easy problem to automate. How should I approach this problem? Which algorithms should I study? Would OpenCV help me?
Thanks!
I think this can be automated using computer vision techniques, and OpenCV could make the implementation easier. If all the signs are similar, we can train a program to recognize the sign under specific conditions (lighting). Then we can use the trained classifier to check the visibility of the signs every hour with a simple script.
Haar-like feature extraction is already available in OpenCV. You can use it to train a classifier, which will output an .xml file, and then use that .xml file for detecting the sign regularly.
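A hedged sketch of the detection step, assuming you have already trained such a cascade (the file and image names are placeholders):

import cv2

cascade = cv2.CascadeClassifier("sign_cascade.xml")   # produced by opencv_traincascade (placeholder name)
frame = cv2.imread("runway_camera.jpg")               # placeholder image path
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
signs = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print("signs detected:", len(signs))                  # more far-away signs detected = better visibility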
I have done a similar project, RTVTR (Real Time Vehicle Tracking and Recognition), using OpenCV and it worked great. http://www.youtube.com/watch?v=xJwBT76VEZ4
Answering to your questions:
How should I approach this problem?
It depends on the result you want/need to obtain. Is this a "hobby" project (even if job-related), or do you need to build a machine vision system to solve the problem, and should it be compliant with some regulations or standards?
Which algorithms should I study?
I am very interested in your question, but I am not an expert in the field of meteorology, so searching the relevant literature is, for me, a time-consuming task... so I reserve the right to update this part of the answer in the future. I think there will be different algorithms involved in the solution of the problem: some very general, for example algorithms for image segmentation, and some very specific, for example how to measure the visibility.
Update: one of the keyword for searching in the literature is Meteorological Visibility, for example
HAUTIERE, Nicolas, et al. Automatic fog detection and estimation of visibility distance through use of an onboard camera. Machine Vision and Applications, 2006, 17.1: 8-20.
LENOR, Stephan, et al. An Improved Model for Estimating the Meteorological Visibility from a Road Surface Luminance Curve. In: Pattern Recognition. Springer Berlin Heidelberg, 2013. p. 184-193.
Would OpenCV help me?
Yes, I think OpenCV can help giving you a starting point.
An idea for a naïve algorithm:
Segment the image in order to get the pixel regions belonging to the signs and to the background.
Compute the measure of visibility according to some procedure; the measure is computed by a function that takes as input the regions of all the signs and the background region.
The segmentation can be simplified a lot if the signs are always in the same fixed and known position inside the image.
The measure of visibility is obviously the core of the algorithm and it can be performed in a lot of ways...
You can follow a simple approach where you compute the visibility with a mathematical formula based on the average gray level of the signs and background regions.
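For example, a minimal sketch of that simple approach (the ROIs, distances and threshold are all placeholders you would have to calibrate for your cameras):

import numpy as np

def visibility_distance(gray, sign_rois, sign_distances_m, background_roi, threshold=0.05):
    # gray: grayscale image as a NumPy array; ROIs are (x, y, w, h) tuples.
    bx, by, bw, bh = background_roi
    background = gray[by:by + bh, bx:bx + bw].mean()
    farthest = 0
    for (x, y, w, h), dist in zip(sign_rois, sign_distances_m):
        sign = gray[y:y + h, x:x + w].mean()
        contrast = abs(sign - background) / max(background, 1e-6)   # Weber-style contrast
        if contrast > threshold:
            farthest = max(farthest, dist)
    return farthest   # distance of the farthest sign that still stands out from the background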
You can follow a more sophisticated and machine-learning oriented approach where you implement an algorithm that mimics your current human-based procedure. In this case your problem can be framed as a supervised learning task: you have a set of training examples, and each training example is a pair composed of a) the photo of the runway (the input) and b) the visibility related to that photo as estimated by a human (the desired output). The system is then trained on the training set, and when you give it a new photo as input it will give you back the visibility measure. I think you have a log of past visibility measures (METAR?), and if you saved the related images too, you already have a substantial amount of data for building a training set and a test set.
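A hedged sketch of that supervised framing with scikit-learn; the feature vectors and labels below are synthetic placeholders, while in practice X would hold image-derived features (such as the per-sign contrasts above) and y the human-recorded visibility:

import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X = np.random.rand(500, 10)          # placeholder: features extracted from each photo
y = np.random.rand(500) * 2000.0     # placeholder: visibility in metres from the observer's log

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
print("mean absolute error (m):", np.abs(model.predict(X_test) - y_test).mean())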
Update in the age of Convolutional Neural Networks:
YOU, Yang, et al. Relative CNN-RNN: Learning Relative Atmospheric Visibility from Images. IEEE Transactions on Image Processing, 2018.
Both Tensor's and uvts_cvs's replies are very helpful. While OpenCV mainly helps you recognize the sign pattern or even segment it from the background, when you extract the core feature of your problem, visibility, you may still need to include the background signal in your training set. I assume the manual visibility check is based on image contrast; if so, the signal-to-noise ratio (SNR) or contrast-to-noise ratio (CNR) is a good feature for learning. A threshold can then be defined to classify 'visible' (1) versus 'invisible' (0). The SNR/CNR can be obtained automatically, especially if the sign positions and sizes are fixed in your camera images.
Gather a whole bunch of photos and videos and propose it as a challenge on Kaggle. I am sure many people would like to try to solve it, even if the reward would not be very high.
You can use the template matching functionality of OpenCV:
http://docs.opencv.org/doc/tutorials/imgproc/histograms/template_matching/template_matching.html
Where the template is the sign. If you manage to find a correct match, then the sign is visible. I think you can also get a sense of the scale of the sign in the image from that code.
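A minimal sketch of that idea (the paths and the match threshold are placeholders):

import cv2

image = cv2.imread("runway_camera.jpg", cv2.IMREAD_GRAYSCALE)     # placeholder paths
template = cv2.imread("sign_template.png", cv2.IMREAD_GRAYSCALE)

result = cv2.matchTemplate(image, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(result)
print("best match score:", max_val, "at", max_loc)
sign_visible = max_val > 0.7   # threshold chosen by experiment

Since plain template matching is not scale-invariant, in practice you would match each sign against a template prepared for its known, fixed position and apparent size.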
As this is a very controlled and static environment, you have perfect conditions to estimate the visibility with vision-based approaches. Nonetheless, it is not so easy to decide which approach to take. In my thesis, I review this topic in depth for the less well-controlled environment of road traffic. See: LENOR, Stephan. Model-Based Estimation of Meteorological Visibility in the Context of Automotive Camera Systems. Doctoral thesis, 2016. (https://archiv.ub.uni-heidelberg.de/volltextserver/20855/1/20160509_lenor_thesis_final_print.pdf).
I see two major directions you could follow up:
Model-based approaches. Advantages: not very dependent on your specific setup; you do not need to collect large amounts of data.
Data-based approaches/ML. Advantages: can hide the whole complexity of different light and weather conditions; you seem to have a good source of data if people are doing the job right now; very promising without much engineering effort (just use a light-weight CNN with a few layers or so).
You could also combine both, and so on. If you are still interested in a solution, you can contact me again and I am happy to consult in more depth.
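To make the "light-weight CNN" suggestion slightly more concrete, here is a hedged Keras sketch (the input size and layer sizes are arbitrary placeholders, and the visibility is regressed directly from the photo):

import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(128, 128, 1)),          # grayscale runway photo, resized
    tf.keras.layers.Conv2D(16, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1),                             # regressed visibility distance
])
model.compile(optimizer="adam", loss="mae")
# model.fit(photos, human_visibility_labels, ...) with your logged data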
I've been tasked with classifying 350k documents into "signed" and "not signed" piles. What is the fastest way to search for something that looks like a human signature with open source tools? To compound the problem, I need to assume that each document has a unique length and signature location. Does anyone have any ideas?
With a little bit of searching on Google, I found this IEEE article. You will need an account to be able to read it.
Here is the description:
Detecting and segmenting free-form objects from cluttered backgrounds is a challenging problem in computer vision. Signature detection in document images is one classic example and as of yet no reasonable solutions have been presented. In this paper, we propose a novel multi-scale approach to jointly detecting and segmenting signatures from documents with diverse layouts and complex backgrounds. Rather than focusing on local features that typically have large variations, our approach aims to capture the structural saliency of a signature by searching over multiple scales. This detection framework is general and computationally tractable. We present a saliency measure based on a signature production model that effectively quantifies the dynamic curvature of 2D contour fragments. Our evaluation using large real world collections of handwritten and machine printed documents demonstrates the effectiveness of this joint detection and segmentation approach.
I have made a videochat, but as usual, a lot of men like to, ehm, abuse the service (I leave it up to you to figure out the nature of such abuse), which is not something I endorse in any way, nor do most of my users. No, I have not stolen chatroulette.com :-) Frankly, I am half-embarrassed to bring this up here, but my question is technical and rather specific:
I want to filter/deny users based on their video content when this content is of offending character, like user flashing his junk on camera. What kind of image comparison algorithm would suit my needs?
I have spent a week or so reading some scientific papers and have become aware of multiple theories and their implementations, such as SIFT, SURF and some of the wavelet based approaches. Each of these has drawbacks and advantages of course. But since the nature of my image comparison is highly specific - to deny service if a certain body part is encountered on video in a range of positions - I am wondering which of the methods will suit me best?
Currently, I lean towards something along the lines of the following (wavelet-based, plus what I assume are some proprietary innovations):
http://grail.cs.washington.edu/projects/query/
With the above, I can simply draw the offending body part and expect offending content to be considered a match based on a threshold. Then again, I am unsure whether the method is invariant to transformations and, if it is, to what kind - the paper isn't really specific on that.
Alternatively, I am thinking that a SURF implementation could do, but I am afraid that it could give me false positives. Can such an implementation be trained to recognize/give weight to a specific feature?
I am aware that there exist numerous questions on SURF and SIFT here, but most of them are generic in that they usually explain how to "compare" two images. My comparison is feature specific, not generic. I need a method that does not just compare two similar images, but one which can give me a rank/index/weight for a feature (however the method lets me describe it, be it an image itself or something else) being present in an image.
Looks like you need not feature detection but object recognition, e.g. the Viola-Jones method.
Take a look at the facedetect.cpp example shipped with OpenCV (there are also several ready-to-use Haar cascades: a face detector, a body detector...). It uses image features called Haar wavelets. You might also want to use color information; take a look at the CamShift algorithm (also available in OpenCV).
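As a hedged illustration of the "use color information" part, here is a crude skin-pixel ratio in HSV; the color ranges and the threshold are rough, illustrative values, not a robust filter:

import cv2
import numpy as np

def skin_fraction(frame_bgr):
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Very rough skin-tone range; expect false positives (faces, walls, etc.).
    mask = cv2.inRange(hsv, np.array([0, 40, 60], dtype=np.uint8), np.array([25, 255, 255], dtype=np.uint8))
    return mask.mean() / 255.0

# Flag a frame for review if a large share of it looks like skin (illustrative threshold):
# suspicious = skin_fraction(frame) > 0.4

This would only be a weak pre-filter to combine with an actual detector such as the cascades above.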
This is more about computer vision. You have to recognize objects in your image/video sequence, whatever... for that, you can use a lot of different algorithms (most of them work in the spectral domain, that's why you will have to use a transformation).
In order to be accurate, you will also need a knowledge base or, at least, some descriptors that will define the object.
Try OpenCV, it has some algorithms already implemented (and basic descriptors included).
There are applications/algorithms out there that you can "train" (like neural networks) and are able to identify objects based on the training. Most of them (at least, the good ones) are not very popular and can only be found in research groups specialized in computer vision, object recognition, AI, etc.
Good luck!