Labeling Medical Image

I want to extract ground truth from my medical data and am looking for a program that can help with this. What I want to do is as follows.
I want to select a specific area and make it white, with everything else black, so that I end up with a ground-truth mask. There are examples in the pictures. Note: I don't have ground truth yet, only the original images; I need to draw and extract this area from the original image...
Thank you for your help in advance.

Let's split your image into its two constituent parts, first image.png:
and second, mask.png:
Now you can just use ImageMagick in the Terminal without writing any code. You have a couple of choices. You can either:
make the black parts of the mask transparent in the result, or
make the black parts of the mask black in the result.
Let's make them transparent first, so we are effectively copying the mask into the image and treating it as an alpha/transparency layer:
magick image.png mask.png -compose copyalpha -composite result.png
And now let's make them black, by choosing the darker of the original image and the mask at each pixel location - hence the darken blend mode:
magick image.png mask.png -compose darken -composite result.png
Note that if you use the first technique, the original information that appears transparent is still in the image and can be retrieved - so do not use this technique to hide confidential information.
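To see why, here is a minimal sketch of recovering the hidden pixels (my own illustration, applied to the result.png produced by the copyalpha command above): simply discarding the alpha channel brings them straight back.
from PIL import Image
# Load the composited result and throw away the alpha channel
im = Image.open('result.png').convert('RGB')
# The original, "hidden" pixels are fully intact again
im.save('revealed.png')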
If you want to use the transparency method from Python with PIL, you can do:
from PIL import Image
# Read image and mask as PIL Images
im = Image.open('image.png').convert('RGB')
ma = Image.open('mask.png').convert('L')
# Merge in mask as alpha channel and save
im.putalpha(ma)
im.save('result.png')
Or, transparency method with OpenCV and Numpy:
import cv2
import numpy as np
# Open image and mask as Numpy arrays
im = cv2.imread('image.png')
ma = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
# Merge mask in as alpha channel and save
res = np.dstack((im,ma))
cv2.imwrite('result.png', res)
If you want to use the blacken method with Python and PIL/Pillow, use:
from PIL import Image, ImageChops
# Read image and mask as PIL Images
im = Image.open('image.png').convert('RGB')
ma = Image.open('mask.png').convert('RGB')
# Choose darker image at each pixel location and save
res = ImageChops.darker(im, ma)
res.save('result.png')
If you want to use the blacken method with OpenCV and Numpy, use the code above but replace the np.dstack() line with:
res = np.minimum(im, ma[...,np.newaxis])
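Putting that together, the fully assembled OpenCV blacken version would look like this (just the snippets above combined, using the same file names):
import cv2
import numpy as np
# Open image and mask as Numpy arrays
im = cv2.imread('image.png')
ma = cv2.imread('mask.png', cv2.IMREAD_GRAYSCALE)
# Choose the darker of image and mask at each pixel location and save
res = np.minimum(im, ma[...,np.newaxis])
cv2.imwrite('result.png', res)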

I can highly recommend ITK-SNAP for this task. You can manually label your input images with certain labels (1 for foreground, 0 for background in your example) and export the ground truth very comfortably.
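If you would rather create such a mask programmatically instead of in a GUI, here is a minimal OpenCV sketch (my own addition, not an ITK-SNAP feature; the file name and polygon vertices are placeholders you would replace with your own):
import cv2
import numpy as np
# Load the original image just to get its dimensions
im = cv2.imread('original.png', cv2.IMREAD_GRAYSCALE)
# Hypothetical (x, y) vertices of the region you want to label
pts = np.array([[120, 80], [300, 90], [320, 240], [110, 230]], dtype=np.int32)
# Black background, white filled polygon = binary ground-truth mask
mask = np.zeros(im.shape[:2], dtype=np.uint8)
cv2.fillPoly(mask, [pts], 255)
cv2.imwrite('ground_truth.png', mask)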

Related

Gimp AutoInputLevels in ImageMagick

I am trying to recreate GIMP's AutoInputLevels (Colors > Levels > Auto Input Levels) in ImageMagick (I need to batch process 1000 files). This is for an infrared image. I tried -contrast-stretch, -normalize and -auto-level, but they didn't help. Any suggestions?
Thanks.
Edit: I won't be able to provide representative images. However, when I say they didn't help: I am using other operations in GIMP, and doing the same (auto level and hard_light in ImageMagick) does not produce equivalent results.
Adding to @Mark Setchell's answer, I can tweak it a bit and get close using:
Input:
convert redhat.jpg -channel rgb -contrast-stretch 0.6%x0.6% im.png
ImageMagick Result:
GIMP AutoInputLevels Result:
And get a numerical comparison:
compare -metric rmse gimp.png im.png null:
363.484 (0.00554641)
which is about 0.5% difference.
As you didn't provide a representative image, or expected result, I synthesised an image with three different distributions of red, green and blue pixels, using code like this:
#!/usr/bin/env python3
from PIL import Image
import numpy as np
w, h = 640, 480
# Synthesize red channel, mu=64, sigma=3
r = np.random.normal(64, 3, size=h*w)
# Synthesize green channel, mu=128, sigma=10
g = np.random.normal(128, 10, size=h*w)
# Synthesize blue channel, mu=192, sigma=6
b = np.random.normal(192, 6, size=h*w)
# Merge channels into an RGB image, round, and convert to 8-bit
RGB = np.dstack((r,g,b)).reshape((h,w,3)).round().astype(np.uint8)
# Save
Image.fromarray(RGB).save('result.png')
Then I applied the GIMP AutoInputLevels contrast stretch that you are asking about, and saved the result. The resulting histogram is:
which seems to show that the black-point and white-point levels have been set independently for each channel. So, in ImageMagick I guess you would want to separate the channels, apply some contrast-stretch to each channel independently and then recombine the channels along these lines:
magick input.png -separate -contrast-stretch 0.35%x0.7% -combine result.png
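If you would rather do the same per-channel stretch in Python, here is a rough numpy/PIL sketch of the idea (my own, reusing the 0.35%/0.7% clipping percentages above as an assumption):
import numpy as np
from PIL import Image
im = np.array(Image.open('input.png').convert('RGB')).astype(np.float64)
out = np.empty_like(im)
for c in range(3):
    # Black point at the 0.35th percentile, white point at the 99.3rd, per channel
    lo, hi = np.percentile(im[..., c], (0.35, 100 - 0.7))
    out[..., c] = (im[..., c] - lo) / max(hi - lo, 1) * 255
Image.fromarray(out.clip(0, 255).astype(np.uint8)).save('result.png')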
If you provide representative input and output images, you may get a better answer.

How to filter image to throw away stray pixels?

I have image data that comprises mostly roundish images surrounded by a boring black background. I am handling this by grabbing the bounding box using PIL's getbbox(), and then cropping. This gives me some satisfaction, but tiny specks of grey within the sea of boring black cause getbbox() to return bounding boxes that are too large.
A deliberately generated problematic image is attached; note the single dark-grey pixel in the lower right. I have also included a more typical "real world" image.
Generated problematic image
Real-world image
I have done some faffing around with the UnsharpMask, SHARPEN and BLUR filters in the PIL ImageFilter module with no success.
I want to throw out those stray gray pixels and get a nice bounding box, but without hosing my image data.
You want to run a median filter on a copy of your image to get the bounding box, then apply that bounding box to your original, unblurred image. So:
copy your original image
apply a median blur filter to the copy - probably 5x5 depending on the size of the speck
get bounding box
apply bounding box to your original image.
Here is some code to get you started:
#!/usr/local/bin/python3
import numpy as np
from PIL import Image, ImageFilter
# Load image
im = Image.open('eye.png').convert('L')
orig = im.copy() # Save original
# Threshold to make black and white
thr = im.point(lambda p: p > 128 and 255)
# Following line is just for debug
thr.save('result-1.png')
# Median filter to remove noise
fil = thr.filter(ImageFilter.MedianFilter(3))
# Following line is just for debug
fil.save('result-2.png')
# Get bounding box from filtered image
bbox = fil.getbbox()
# Apply bounding box to original image and save
result = orig.crop(bbox)
result.save('result.png')

Segmentation problem for tomato leaf images in PlantVillage Dataset

I am trying to do segmentation of leaf images of tomato crops. I want to convert images like the following image
to the following image with a black background
I have referenced this code from GitHub
but it does not do well on this problem. It does something like this
Can anyone suggest a way to do it?
The image is separable using the HSV-colorspace. The background has little saturation, so thresholding the saturation removes the gray.
Result:
Code:
import numpy as np
import cv2
# load image
image = cv2.imread('leaf.jpg')
# create hsv
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)
# set lower and upper color limits
low_val = (0,60,0)
high_val = (179,255,255)
# Threshold the HSV image
mask = cv2.inRange(hsv, low_val,high_val)
# remove noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel=np.ones((8,8),dtype=np.uint8))
# apply mask to original image
result = cv2.bitwise_and(image, image,mask=mask)
#show image
cv2.imshow("Result", result)
cv2.imshow("Mask", mask)
cv2.imshow("Image", image)
cv2.waitKey(0)
cv2.destroyAllWindows()
The problem with your image is the different coloration of the leaf. If you convert the image to grayscale, you will see the problem for the binarization algorithm:
Do you notice the very different brightness of the bottom half and the top half of the leaf? This gives you three mostly uniformly bright areas of the image: The actual background, the top-half leaf and the bottom-half leaf. That's not good for binarization.
However, your problem can be solved by separating your color image into its respective channels. After separation, you will notice that in the blue channel the leaf looks very uniform:
Which makes sense if we think about the colors we are talking about: both green and yellow contain very little blue, if any.
This makes it easy for us to binarize it. For the sake of a clearer image, I first applied smoothing
and then used the iso_data Threshold of ImageJ (you can however use any of the existing automatic thresholding methods available) to create a binary mask:
Because the algorithm has set the leaf to background (black), we have to invert it:
This mask can be further improved by applying binary "fill holes" algorithms:
This mask can be used to crop the original image to extract the leaf:
The quality of the result image could be further improved by eroding the mask a little bit.
For the sake of completeness: You do not have to smooth the image, to get a result. Here is the mask for the unsmoothed image:
To remove the noise, you first apply binary fill holes, then binary closing followed by binary erosion. This will give you:
as a mask.
This will lead to the final segmented result.
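For reference, here is a rough OpenCV translation of this workflow (not the ImageJ steps themselves; Otsu's method stands in for the iso_data threshold, morphological closing stands in for fill-holes, and the kernel sizes are guesses):
import cv2
import numpy as np
image = cv2.imread('leaf.jpg')
# Blue channel: the leaf is uniform and darker than the grey background (OpenCV loads BGR)
blue = image[:, :, 0]
# Smooth, then threshold automatically; INV makes the dark leaf white
blur = cv2.GaussianBlur(blue, (5, 5), 0)
_, mask = cv2.threshold(blur, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
# Close small holes, then erode slightly to tighten the outline
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))
mask = cv2.erode(mask, np.ones((3, 3), np.uint8))
# Apply the mask to put the leaf on a black background
result = cv2.bitwise_and(image, image, mask=mask)
cv2.imwrite('leaf_mask.png', mask)
cv2.imwrite('leaf_segmented.png', result)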

Stitching images using GraphicsMagick with Blending

I have to stitch a number of tiles using GraphicsMagick to create one single image. I am currently using gm convert with -mosaic and some overlap to stitch the tiles, but the stitched image has a border where the overlap is.
Following is the command I am using:
gm convert -background transparent
-page "+0+0" "E:/Images/Scan 001_TileScan_001_s00_ch00.tif"
-page "+0+948" "E:/Images/Scan 001_TileScan_001_s01_ch00.tif"
-page "+0+1896" "E:/Images/Scan 001_TileScan_001_s02_ch00.tif"
-page "+0+2844" "E:/Images/Scan 001_TileScan_001_s03_ch00.tif"
-mosaic "E:/Output/temp/0.png"
The final image looks like this:
How can I stitch and blend without a border?
I've been part of several projects to make seamless image mosaics. There are a couple of other factors you might like to consider:
Flatfielding. Take a shot of a piece of white card with your lens and lighting setup, then use that to flatten out the image lightness (there is a rough sketch of the idea after this list). I don't know if GM has a thing to do this, @fmw42 would know. A flatfield image is specific to a lighting setup, lens aperture setting, focus setting and zoom setting, so you need to lock focus/aperture/zoom after taking one. You'll need to do this correction in linear light.
Lens distortion. Some lenses, especially wide-angle ones, will introduce significant geometric distortion. Take a shot of a piece of graph paper and check that the lines are all parallel. It's possible to use a graph-paper shot to automatically generate a lens model you can use to remove geometric errors, but simply choosing a lens with low distortion is easier.
Scatter. Are you moving the object or the camera? Is the lighting moving too? You can have problems with scatter if you shift the object: bright parts of the object will scatter light into dark areas when they move under a light. You need to model and remove this or you'll see seams in darker areas.
Rotation. You can get small amounts of rotation, depending on how your translation stage works and how carefully you've set the camera up. You can also get the focus changing across the field. You might find you need to correct for this too.
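As promised above, here is a rough sketch of the flatfielding idea (in pyvips, since that is used below; the file names are placeholders and a real pipeline would be more careful about noise in the flatfield shot):
import pyvips
# Hypothetical file names: one raw tile and the white-card flatfield shot
tile = pyvips.Image.new_from_file("tile.jpg")
flat = pyvips.Image.new_from_file("flatfield.jpg")
# Work in linear light: remove the sRGB gamma before dividing
tile_lin = tile.colourspace("scrgb")
flat_lin = flat.colourspace("scrgb") + 1e-6   # tiny offset avoids divide-by-zero
# Divide out the flatfield, rescale to preserve overall brightness, back to sRGB
corrected = (tile_lin / flat_lin * flat_lin.avg()).colourspace("srgb")
corrected.write_to_file("tile_flat.jpg")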
libvips has a package of functions for making seamless image mosaics, including all of the above features. I made an example for you with these source images (near-IR images of painting underdrawing):
Entering:
$ vips mosaic cd1.1.jpg cd1.2.jpg join.jpg horizontal 531 0 100 0
Makes a horizontal join to the file join.jpg. The numbers give a guessed overlap of 100 pixels -- the mosaic program will do a search and find the exact position for you. It then does a feathered join using a raised cosine to make:
Although the images have been flatfielded, you can see a join. This is because the camera sensitivity has changed as the object has moved. The libvips globalbalance operation will automatically take the mosaic apart, calculate a set of weightings for each frame that minimise average join error, and reassemble it.
For this pair I get:
nip2, the libvips GUI, has all this with a GUI interface. There's a chapter in the manual (press F1 to view) about assembling large image mosaics:
https://github.com/jcupitt/nip2/releases
Global balance won't work from the CLI, unfortunately, but it will work from any of the libvips language bindings (C#, Python, Ruby, JavaScript, C, C++, Go, Rust, PHP etc. etc.). For example, in pyvips you can write:
import pyvips
left = pyvips.Image.new_from_file("cd1.1.jpg")
right = pyvips.Image.new_from_file("cd1.2.jpg")
join = left.mosaic(right, "horizontal", 531, 0, 100, 0)
balance = join.globalbalance()
balance.write_to_file("x.jpg")
Here is an example using ImageMagick. But since the colors are different, you will only mitigate the sharp edge with a ramped blend. The closer the colors are and the more gradual the blend (i.e. over a larger area), the less it will show.
1) Create red and blue images
convert -size 500x500 xc:red top.png
convert -size 500x500 xc:blue btm.png
2) Create a mask that is solid white for most of its height and a gradient where you want the overlap. Here I have a 100-pixel gradient for a 100-pixel overlap.
convert -size 500x100 gradient: -size 500x400 xc:black -append -negate mask_btm.png
convert mask_btm.png -flip mask_top.png
3) Put masks into the alpha channels of each image
convert top.png mask_top.png -alpha off -compose copy_opacity -composite top2.png
convert btm.png mask_btm.png -alpha off -compose copy_opacity -composite btm2.png
4) Mosaic the two images one above the other with an overlap of 100
convert -page +0+0 top2.png -page +0+400 btm2.png -background none -mosaic result.png
See also my tidbit about shaping the gradient at http://www.fmwconcepts.com/imagemagick/tidbits/image.php#composite1. But I would use a linear gradient for such work (as shown here), because as you overlap linear gradients they sum to a constant white, so the result will be fully opaque where they overlap.
One other thing to consider is trying to match the colors of the images to some common color map. This can be done by a number of methods. For example, histogram matching or mean/std (brightness/contrast) matching. See for example, my scripts: histmatch, matchimage and redist at http://www.fmwconcepts.com/imagemagick/index.php and ImageMagick -remap at https://www.imagemagick.org/Usage/quantize/#remap
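As a rough illustration of the mean/std (brightness/contrast) matching idea, here is a small numpy sketch (my own, not one of the scripts linked above; the file names are placeholders):
import numpy as np
from PIL import Image
src = np.array(Image.open('tileA.png').convert('RGB')).astype(np.float64)
ref = np.array(Image.open('tileB.png').convert('RGB')).astype(np.float64)
# Shift and scale each channel of src so its mean and std match the reference
out = np.empty_like(src)
for c in range(3):
    s_mu, s_sd = src[..., c].mean(), src[..., c].std()
    r_mu, r_sd = ref[..., c].mean(), ref[..., c].std()
    out[..., c] = (src[..., c] - s_mu) / max(s_sd, 1e-6) * r_sd + r_mu
Image.fromarray(out.clip(0, 255).astype(np.uint8)).save('tileA_matched.png')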

Compare scanned image (label) with original

There is an original high-quality label. After it has been printed, we scan a sample and want to compare it with the original to find errors in the printed text, for example. The original and scanned images are almost the same size (but a bit different).
ImageMagick can do this well, but not with the scanned image (I suppose it compares bitwise, but the scanned image contains too much "noise").
Is there a utility that can do such a comparison? Or maybe an algorithm (implemented or easy to implement), like the one that uses the Cauchy–Schwarz inequality in signal processing?
Adding sample pics.
Original:-
Scanned:-
Further Thoughts
As I explained in the comments, I think the registration of the original and scanned images is going to be important as your scans are not exactly horizontal nor the same size. To do a crude registration, you could find some points of high-contrast that are hopefully unique in the original image. So, say I wanted one on the top-left (called tl.jpg), one in the top-right (tr.jpg), one in the bottom-left (bl.jpg) and one in the bottom-right (br.jpg). I might choose these:
I can now find these in the original image and in the scanned image using a sub-image search, for example:
compare -metric RMSE -subimage-search original.jpg tl.jpg a.png b.png
1148.27 (0.0175214) # 168,103
That shows me where the sub-image has been found, and the second (greyish) image shows me a white peak where the image is actually located. It also tells me that the sub image is at coordinates [168,103] in the original image.
compare -metric RMSE -subimage-search scanned.jpg tl.jpg a.png b.png
7343.29 (0.112051) # 173,102
And now I know that same point is at coordinates [173,102] in the scanned image. So I need to transform [173,102] to [168,103].
I then need to do that for the other sub images:
compare -metric RMSE -subimage-search scanned.jpg br.jpg result.png
8058.29 (0.122962) # 577,592
Ok, so we can get 4 points, one near each corner in the original image, and their corresponding locations in the scanned image. Then we need to do an affine transformation - which I may, or may not do in the future. There are notes on how to do it here.
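To sketch that missing step (my own rough Python/OpenCV illustration; only the first point pair comes from the compare runs above, the rest are placeholders):
import cv2
import numpy as np
scanned = cv2.imread('scanned.jpg')
original = cv2.imread('original.jpg')
# Matched locations of the four corner patches: scan -> original.
# The first pair is from the compare output above; the others are made up.
src = np.float32([[173, 102], [580, 105], [175, 590], [577, 592]])
dst = np.float32([[168, 103], [575, 101], [171, 588], [572, 590]])
# Estimate a least-squares affine transform from the point pairs and apply it
M, _ = cv2.estimateAffine2D(src, dst)
registered = cv2.warpAffine(scanned, M, (original.shape[1], original.shape[0]))
cv2.imwrite('registered.png', registered)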
Original Answer
It would help if you were able to supply some sample images to show what sort of problems you are expecting with the labels. However, let's assume you have these:
label.png
unhappy.png
unhappy2.png
I have only put a red border around them so you can see the edges on this white background.
If you use Fred Weinhaus's script similar from his superb website, you can now compute a normalised cross correlation between the original image and the unhappy ones. So, taking the original label and the one with one track of white across it, they come out pretty similar (96%)
./similar label.png unhappy.png
Similarity Metric: 0.960718
If we now try the more unhappy one with two tracks across it, they are less similar (92%):
./similar label.png unhappy2.png
Similarity Metric: 0.921804
Ok, that seems to work. We now need to deal with the shifted and differently sized scan, so I will attempt to trim the images down to just the important stuff, blur them to lose any noise, and resize them to a standardised size for comparison, using a little script.
#!/bin/bash
image1=$1
image2=$2
fuzz="10%"
filtration="-median 5x5"
resize="-resize 500x300"
echo DEBUG: Preparing $image1 and $image2...
# Get cropbox from blurred image
cropbox=$(convert "$image1" -fuzz $fuzz $filtration -format %# info:)
# Now crop original unblurred image and resize to standard size
convert "$image1" -crop "$cropbox" $resize +repage im1.png
# Get cropbox from blurred image
cropbox=$(convert "$image2" -fuzz $fuzz $filtration -format %# info:)
# Now crop original unblurred image and resize to standard size
convert "$image2" -crop "$cropbox" $resize +repage im2.png
# Now compare using Fred's script
./similar im1.png im2.png
We can now compare the original label with a new image called unhappy-shifted.png
./prepare label.png unhappy-shifted.png
DEBUG: Preparing label.png and unhappy-shifted.png...
Similarity Metric: 1
And we can see they compare the same despite being shifted. Obviously I cannot see your images, how noisy they are, what sort of background you have, how big they are, what colour they are and so on - so you may need to adjust the preparation where I have just done a median filter. Maybe you need a blur and/or a threshold. Maybe you need to go to greyscale.
