How to increase the quality of a chest X-ray image - image-processing

I currently have a project on identifying and predicting lung nodules in X-ray images. However, I'm not an expert in the field of image processing, so I do not know how to improve the quality of the images before feeding them into the CNN model. An example image is a PNG file... it is a CXR (chest X-ray) image.
If you have any techniques to improve image quality, could you recommend them? I need some help.
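One commonly used enhancement step for chest X-rays is local contrast enhancement with CLAHE, sometimes followed by light denoising. The sketch below uses OpenCV; the file name and every parameter value are illustrative assumptions, not recommendations from the original post.

```python
# Sketch: CLAHE contrast enhancement plus optional denoising for a CXR image.
# File names and parameters are placeholders to be tuned for your data.
import cv2

img = cv2.imread("cxr.png", cv2.IMREAD_GRAYSCALE)   # load the CXR as grayscale

# Contrast Limited Adaptive Histogram Equalization: boosts local contrast,
# with clipLimit keeping noise from being over-amplified.
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(img)

# Optional mild denoising before feeding the image to the CNN.
denoised = cv2.fastNlMeansDenoising(enhanced, h=10)

cv2.imwrite("cxr_enhanced.png", denoised)
```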

Related

Comparing denoising algorithms' performance using PSNR when there is no clean image (ground truth)

I have medical images containing speckle noise, and one of my tasks is to denoise them. I tried 8 algorithms and inspected the denoised images visually.
I want to use PSNR to quantify the process. By definition, PSNR is calculated between a noisy image and a clean (noise-free) image.
Is there any way to quantify denoising performance without having clean images?
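PSNR itself always needs a reference, so one rough workaround is to compare a no-reference noise estimate before and after denoising. This is only a sketch using scikit-image; the file names are placeholders, and the sigma comparison is a proxy that says nothing about how much detail the denoiser destroyed.

```python
# Sketch: PSNR needs a clean reference, so without ground truth a rough
# alternative is to compare a no-reference noise estimate before and after
# denoising. File names are placeholders.
from skimage import io, img_as_float
from skimage.metrics import peak_signal_noise_ratio
from skimage.restoration import estimate_sigma

noisy = img_as_float(io.imread("noisy.png", as_gray=True))
denoised = img_as_float(io.imread("denoised.png", as_gray=True))

# Standard PSNR -- only meaningful if `clean` is a true ground-truth image:
# psnr = peak_signal_noise_ratio(clean, denoised, data_range=1.0)

# Wavelet-based estimate of the noise standard deviation; a drop after
# denoising suggests noise was removed, but it does not measure detail loss.
print("sigma before:", estimate_sigma(noisy))
print("sigma after :", estimate_sigma(denoised))
```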

What is the optimal image resolution to train an image classifier with CreateML?

I want to create an image classifier model using CreateML. I have images available in very high resolution but that comes at a cost in terms of data traffic and processing time, so I prefer to use images as small as possible.
The docs say that:
The images (...) don’t have to be a particular size, nor do they need to be the same size as each other. However, it’s best to use images that are at least 299 x 299 pixels.
I trained a test model with images of various sizes larger than 299x299px, and the model parameters in Xcode show the dimension 299x299px, which I understand is the normalized image size.
This dimension seems to be determined by the CreateML Image Classifier algorithm and is not configurable.
Does it make any sense to train the model with images that are larger than 299x299px?
If the image dimensions are not a square (same height as width) will the training image be center cropped to 299x299px during the process of normalization, or will the parts of the image that are outside the square influence the model?
From reading and from experience training image classification models (but with no insider knowledge of Apple's implementation), it appears that Create ML scales incoming images to fit a 299 x 299 square. You would be wasting disk space and preprocessing time by providing larger images.
The best documentation I can find is to look at the mlmodel file created by CreateML for an image classifier template. The input is explicitly defined as color image 299 x 299. No option to change that setting in the stand-alone app.
Here is some documentation (it applies to the Classifier template, which uses ScenePrint by default):
https://developer.apple.com/documentation/createml/mlimageclassifier/featureextractortype/sceneprint_revision
There may be a Center/Crop option in the Playground workspace, but I never found it in the standalone app version of Create ML.
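If you do decide to shrink the training images ahead of time, a simple offline resize keeps the dataset small. This is only a sketch using Pillow; the center-crop-then-resize policy is my assumption, not a description of what Create ML actually does internally.

```python
# Sketch: pre-shrink training images to 299 x 299 to save disk space and
# preprocessing time. The center-crop-then-resize policy is an assumption,
# not a description of Create ML's internal behaviour.
from PIL import Image

def to_299_square(path_in: str, path_out: str, size: int = 299) -> None:
    img = Image.open(path_in)
    w, h = img.size
    side = min(w, h)
    # Center-crop to a square so the resize introduces no aspect-ratio distortion.
    left = (w - side) // 2
    top = (h - side) // 2
    img = img.crop((left, top, left + side, top + side))
    img.resize((size, size), Image.LANCZOS).save(path_out)

to_299_square("photo_large.jpg", "photo_299.jpg")  # placeholder file names
```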

Detection fails when images are far from the camera

I trained on a few images and label the live image with the trained data. But when the target in the live image is a little far away from the camera, my algorithm cannot detect it. Is there any way to detect far-away images?
Things I tried:
* Trained on more images (i.e., increased the dataset)
* Applied image filters, such as the median filter and the Gaussian filter (see the sketch below)
But those things also failed to detect far-away images.
The detector cannot detect images that are about 5-6 ft from the camera.
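For reference, the filtering steps mentioned above can be reproduced with OpenCV. This is only a sketch; the file name and kernel sizes are placeholders rather than values from the question.

```python
# Sketch of the two filters mentioned above (median and Gaussian), using OpenCV.
# The file name and kernel sizes are placeholders, not values from the question.
import cv2

frame = cv2.imread("live_frame.png")

median_filtered = cv2.medianBlur(frame, 5)                # 5x5 median filter
gaussian_filtered = cv2.GaussianBlur(frame, (5, 5), 0)    # 5x5 Gaussian, sigma derived from kernel
```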

What are the standard techniques/libraries used to determine the digital "image quality" of a facial image?

I'm working on a data set which has a good number of blurred, faded, dark, low-resolution, and noisy face images. I need to eliminate those images in the pre-processing stage, but I can't remove them manually by subjective inspection.
Which libraries/APIs in the open-source domain can be used to evaluate the "quality" of digital face images?
The most commonly used quality metric is mAP (mean average precision), where the accuracy term can be computed as true positives divided by sample size or as the Jaccard index. Either way, you will need ground truth for the dataset.
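For the automatic elimination step described in the question, a very simple no-reference screening pass is sometimes used as a first filter. The sketch below uses OpenCV's variance-of-Laplacian as a blur score and mean intensity as a darkness check; the thresholds are assumptions to be tuned on your own dataset, not standard values.

```python
# Sketch: no-reference screening for blurred or dark face images with OpenCV.
# The thresholds are placeholder assumptions and must be tuned per dataset.
import cv2

def is_low_quality(path: str, blur_thresh: float = 100.0, dark_thresh: float = 40.0) -> bool:
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    blur_score = cv2.Laplacian(gray, cv2.CV_64F).var()   # low variance -> likely blurry
    brightness = float(gray.mean())                      # low mean -> likely underexposed
    return blur_score < blur_thresh or brightness < dark_thresh
```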

Effects of image quality/resolution in feature extraction

I am working on a project that identifies objects after capturing their images on the Android platform. For this, I extracted features from sample images such as compactness, rectangularity, elongation, eccentricity, roundness, sphericity, lobation, and Hu moments. A random tree classifier is then used. Since I built my classifier from pictures gathered from Google, which are not high resolution, captured images of size 1280x720 give 19/20 correct results when the image is cropped.
However, when I capture images at larger sizes, such as about 5 megapixels, and crop them for identification, the number of correct results dramatically decreases.
Do I need to extract features from high-resolution images and train on them in order to get accurate results when high-resolution pictures are captured? Is there a way of adjusting the extracted features according to the image resolution?
Some feature descriptors are sensitive to scaling. Others, like SIFT and SURF, are not. If you expect the resolution (or scale) of your images to change, it's best to use scale-invariant feature descriptors.
If you use feature descriptors that are not scale-invariant, you can still get decent results by normalizing the resolution of your images. Try scaling the 5 megapixel images to 1280x720 -- do the classification results improve?
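As a sketch of that normalization step, the captured frame can be rescaled to the training resolution before extracting the shape features; the file name and interpolation choice below are my assumptions.

```python
# Sketch of the normalization suggested above: rescale the 5 MP capture to the
# resolution used during training before extracting the shape features.
# File name and interpolation choice are assumptions.
import cv2

capture = cv2.imread("capture_5mp.jpg")
normalized = cv2.resize(capture, (1280, 720), interpolation=cv2.INTER_AREA)
# ...extract compactness, eccentricity, Hu moments, etc. from `normalized`...
```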
