I would like to overlay 2 (or more) RGB images in Digital Micrograph by scripting.
Unlike real images without color, which can be merged by summing the intensity, RGB images have to be merged in some other way, but I have no clue how.
Thanks for helping!
You can sum RGB images just like regular images, but your problem is that you need to define what you mean by "overlay".
RGB images are triplets holding a value for each of the three channels RED, GREEN and BLUE, and these values are clipped to the range [0, 255].
"Summing" RGB images will again give you a triplet, but any value bigger than 255 is truncated to 255, so the image will shift more and more towards "white".
You could define an "overlay" as the mean values instead, but then the result of "overlaying" tends more and more towards "average gray".
Or you could define an "overlay" as the "max-values" or "min-values" of the involved channels.
Or, or, or....
When you think of "overlaying" RGB images, it is helpful to think of other graphics programs like Photoshop which allow you to combine "layers". Usually these programs offer multiple options ( "overlay, screen, lighten, darken, you name it..." ) which all define a different mathematical relationship between the three color values of the first layer and the three color values of the second.
The commands you need for this maths are RGB( ), RED( ), GREEN( ) and BLUE( ), plus simple arithmetic. See the example:
// First test image: smooth per-channel gradients
image img1r := RealImage("Red 1",4,256,256)
image img1g := RealImage("Green 1",4,256,256)
image img1b := RealImage("Blue 1",4,256,256)
img1r = icol/iwidth * 256
img1g = irow/iwidth * 256
img1b = iradius/iwidth * 256
RGBImage img1 = RGB(img1r,img1g,img1b)
img1.SetName( "Image 1 (RGB)" )
// Second test image: stripe patterns per channel
image img2r := RealImage("Red 2",4,256,256)
image img2g := RealImage("Green 2",4,256,256)
image img2b := RealImage("Blue 2",4,256,256)
img2r = (icol%10)<5 ? 256 : 100
img2g = (irow%10)<5 ? 256 : 100
img2b = (iradius%10)<5 ? 256 : 100
RGBImage img2 = RGB(img2r,img2g,img2b)
img2.SetName( "Image 2 (RGB)" )
// Four possible definitions of "overlay"
image sumImg = img1 + img2      // sum, clipped at 255
sumImg.SetName( "SUM" )
image avImg = (img1 + img2)/2   // channel-wise average
avImg.SetName( "AVERAGE" )
image maxImg = RGB( max(red(img1),red(img2)), max(green(img1),green(img2)), max(blue(img1),blue(img2)) )
maxImg.SetName( "Channel MAX" )
image minImg = RGB( min(red(img1),red(img2)), min(green(img1),green(img2)), min(blue(img1),blue(img2)) )
minImg.SetName( "Channel MIN" )
// Arrange display
EGUPerformActionWithAllShownImages( "delete" )
minImg.ShowImage()
maxImg.ShowImage()
avImg.ShowImage()
sumImg.ShowImage()
img2.ShowImage()
img1.ShowImage()
TagGroup layout = SLMCreateGridLayout( 2 , 3 )
EGUArrangeAllShownImagesInLayout( layout )
It should also be noted that some "overlay" combinations are not based on the Red/Green/Blue (RGB) color model, but on the alternative Hue/Saturation/Brightness (HSB) color model.
DigitalMicrograph scripting natively supports only RGB, but you can do the maths yourself.
You might also find it useful to look at the example script "Display as HSB.s" on the Gatan script examples site.
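To illustrate the idea outside of DM script, here is a minimal Python/NumPy sketch (my own example, not Gatan code) of one HSB-style combination: hue and saturation from one image, brightness from the other. It uses matplotlib's rgb_to_hsv / hsv_to_rgb helpers on arrays scaled to [0, 1]:
import numpy as np
from matplotlib.colors import rgb_to_hsv, hsv_to_rgb

# Two RGB test images as (H, W, 3) float arrays in [0, 1]
rng = np.random.default_rng(0)
img1 = rng.random((256, 256, 3))
img2 = rng.random((256, 256, 3))

hsv1 = rgb_to_hsv(img1)
hsv2 = rgb_to_hsv(img2)

# Take hue and saturation from image 1, brightness (value) from image 2
combined = hsv1.copy()
combined[..., 2] = hsv2[..., 2]

result = hsv_to_rgb(combined)   # back to RGB, still in [0, 1]
The same per-pixel maths could equally be written in DM script on the RED/GREEN/BLUE components.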
You can script image merging very simply with ImageMagick, which is installed on most Linux distros and is available for OSX and Windows.
As you have not provided any sample images, I have made a couple - image1.png and image2.png.
Now, there are lots of Blend Modes available - some of the more common ones are Lighten, Darken, Overlay and Blend. So, let's try a few at the command line in Terminal:
convert image1.png image2.png -compose darken -composite result.png
convert image1.png image2.png -compose lighten -composite result.png
convert image1.png image2.png -compose overlay -composite result.png
The options are endless - you can get a list of the blend modes available in ImageMagick like this:
identify -list compose
Output
Atop
Blend
Blur
Bumpmap
ChangeMask
Clear
ColorBurn
ColorDodge
Colorize
CopyBlack
CopyBlue
CopyCyan
CopyGreen
Copy
CopyMagenta
CopyOpacity
CopyRed
CopyYellow
Darken
DarkenIntensity
DivideDst
DivideSrc
Dst
Difference
Displace
Dissolve
Distort
DstAtop
DstIn
DstOut
DstOver
Exclusion
HardLight
HardMix
Hue
In
Lighten
LightenIntensity
LinearBurn
LinearDodge
LinearLight
Luminize
Mathematics
MinusDst
MinusSrc
Modulate
ModulusAdd
ModulusSubtract
Multiply
None
Out
Overlay
Over
PegtopLight
PinLight
Plus
Replace
Saturate
Screen
SoftLight
Src
SrcAtop
SrcIn
SrcOut
SrcOver
VividLight
Xor
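If you want to see what a few of these modes actually compute, here is a rough Python/NumPy sketch of the standard Lighten, Darken and Screen formulas on arrays normalized to [0, 1] (a sketch of the textbook maths, not ImageMagick's exact implementation):
import numpy as np

def blend(a, b, mode):
    # a, b: float arrays in [0, 1] with the same shape
    if mode == "lighten":
        return np.maximum(a, b)        # per-channel maximum
    if mode == "darken":
        return np.minimum(a, b)        # per-channel minimum
    if mode == "screen":
        return 1 - (1 - a) * (1 - b)   # invert, multiply, invert
    raise ValueError(f"unknown mode: {mode}")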
I want to run some small images/sprites through OCR (Tesseract, probably) and extract a number or some words from them, and I know these numbers/words will be of a specific color (let's say white on a noisy/colored background).
While reading about pre-processing images for OCR, I thought it would be really beneficial to just remove everything that's not white from the image.
I'm using both imagemagick and vips, but I have no idea where to start, what operations to use, or how to search for them.
If we make a sample image like this:
magick -size 300x100 xc: +noise random -gravity center -fill white -pointsize 48 -annotate 0 "Hello" captcha.png
You can then fill with black anything that is not white:
magick captcha.png -fill black +opaque white result.png
If you want to accept colours close to white as being white, you can include some "fuzz":
magick captcha.png -fuzz 10% -fill black +opaque white result.png
There was a discussion on the libvips tracker a few months ago about techniques for background removal:
https://github.com/libvips/libvips/issues/1567
Here's the filter:
#!/usr/bin/python3
import sys
import pyvips
image = pyvips.Image.new_from_file(sys.argv[1], access="sequential")
# aim for 250 for paper with low freq. removal
# ink seems to be slightly blueish
paper = 250
ink = [150, 160, 170]
# remove low frequencies .. don't need huge accuracy
low_freq = image.gaussblur(20, precision="integer")
image = image - low_freq + paper
# pull the ink down
ink_target = 30
scale = [(paper - ink_target) / (paper - i) for i in ink]
offset = [ink_target - i * s for i, s in zip(ink, scale)]
image = image * scale + offset
# find distance to white of each pixel ... small distances go to white
white = [100, 0, 0]
image = image.colourspace("lab")
d = image.dE76(white)
image = (d < 12).ifthenelse(white, image)
# boost saturation (scale ab)
image = image * [1, 2, 2]
image.write_to_file(sys.argv[2])
It removes low frequencies (i.e. paper folds etc.), stretches the contrast range, finds pixels close to white in CIELAB and moves them to white, and boosts saturation.
You'd probably need to tune it a bit for your use-case. Post some sample images if you need more advice.
I'm no expert in this area, but maybe try changing all pixels with RGB values below a certain threshold to black, or delete them?
As I mentioned before, I'm not very knowledgeable in any of this, but I don't see why this wouldn't work.
If the images are synthetic and uncompressed, you can test for strict equality of the RGB values. Otherwise, use a threshold on the distance between the RGB triples (Euclidean or Manhattan for instance).
If you want to allow variations in the lightness but not in the color, you can convert to HLS and compare HS.
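To make the distance-threshold idea concrete, here is a minimal Python/NumPy sketch; the threshold of 30 is an arbitrary starting value you would tune for your images:
import numpy as np

def near_color_mask(img, target, threshold=30.0):
    # img: (H, W, 3) uint8 array; target: RGB triple such as (255, 255, 255)
    d = np.linalg.norm(img.astype(float) - np.asarray(target, dtype=float), axis=-1)
    return d < threshold   # True where the pixel is "close enough" to target

# Example: blacken everything that is not near-white
# img[~near_color_mask(img, (255, 255, 255))] = 0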
I'm using ImageMagick 6.8 and I have a LUT color table created in text format:
# ImageMagick pixel enumeration: 848,1,255,srgb
0,0: (0 , 0 , 0 ) #000000
1,0: (226, 226, 224) #E2E2E0
2,0: (48 , 74 , 0 ) #304A00
# ...
# few hundred more colors
It has one colour per grayscale value (between 0 and 848 in my use case).
So, I want to convert a grayscale image to an RGB one, using this LUT, without any fancy gamma corrections, colour-space remaps, interpolations, etc. Just straight replacement. How do I do it?
My issues start right at the beginning:
Trying to convert lut.txt lut.png with various options always gives me more colours than there actually are. The LUT contains 540 unique colours, but inspecting the generated PNG, or even running identify lut.txt, reports 615! This means the LUT is not being interpreted verbatim at all.
On the other hand, even if I manage to read the LUT exactly, or avoid converting it to PNG altogether, there is another problem. Using -clut maps the whole greyscale range (0-65535) to the LUT, so I guess I have to normalize the input first. But that screws up the greyscale values to begin with.
P.S. An answer that might also be useful here: is there an image format with a larger-than-8-bit indexed palette? Then the text LUT could be used as its palette and the greyscale raster as its pixel values.
In ImageMagick, use -clut to process a grayscale image with a colored look-up table image, to colorize the grayscale image.
First create a 3-color color table LUT image with red, green and blue hex colors. I show an enlarged version.
convert xc:"#ff0000" xc:"#00ff00" xc:"#0000ff" +append colortable.gif
Here is the input - a simple gradient that I will colorize.
Now apply the color table image to the gradient using -clut.
convert gradient.png colortable.gif -clut gradient_colored.png
The default is a linear interpolation. But if you only want to see the 3 colors, then use -interpolate nearest-neighbor.
convert gradient.png colortable.gif -interpolate nearest-neighbor -clut gradient_colored2.png
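If you want to sidestep ImageMagick's interpolation entirely, the "straight replacement" the question asks for is just array indexing. A minimal Python/NumPy sketch (parsing the text LUT into an array is left out; the shapes are assumptions based on the question):
import numpy as np

# lut: (848, 3) uint8 array parsed from the text LUT (one row per grey level)
lut = np.zeros((848, 3), dtype=np.uint8)

# gray: integer image whose pixel values are already LUT indices, 0..847
gray = np.zeros((100, 100), dtype=np.intp)

rgb = lut[gray]   # (100, 100, 3): straight per-pixel lookup, no interpolation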
I've got a PNG image with transparency:
original.png
Now I want to use ImageMagick to apply a diagonal gradient to its alpha channel. I mean so that its opacity remains in the top left corner, and gradually fades out to completely transparent in the bottom right corner. Like this:
result.png
So basically I want to generate a gradient, and use that as a mask for the image. But the image already has an alpha channel (transparency) of its own. Here's a visualisation of what I'm trying:
(original and result here displayed on a checkerboard for visibility, but I mean actual transparency)
I think I understand how to generate a diagonal gradient (the barycentric gradient command is very useful for this). But this creates a gradient in the color channels, i.e. a colored or grayscale gradient, whereas I want to apply the gradient to the alpha channel.
From the IM manual I understand the -compose CopyOpacity operator could be used for this. However, this seems to copy the alpha from the mask onto my image. I need to "apply" this gradient to my existing alpha channel, so basically I need my image's alpha channel to be multiplied by the grayscale value from the gradient image.
What would be the correct IM command line to perform the operation displayed above?
Here is one way you could do it:
convert tree.png -write MPR:orig -alpha extract \
\( +clone -colorspace gray -fx "1-j/h" \) \
-compose multiply -composite -write alpha.png \
MPR:orig +swap -compose copyopacity -composite result.png
The -write alpha.png can be omitted - it just shows the alpha layer for debugging and illustration purposes.
The MPR is just a temporary copy of the original image that I hold in memory while I am dinking around with the alpha channel and which I bring back near the end. The gradient in the alpha channel is generated by the -fx and I made the colorspace gray first so it only has to run once, instead of three times.
If you knew the dimensions of the tree image up front, you could replace the part in parentheses with:
-size WxH gradient:black-white
but I don't know the dimensions up front and I don't want a second convert command to get them, so I basically clone the original image's alpha channel to get a canvas the right size and fill it in with -fx.
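The same "multiply the existing alpha by a gradient" idea can be sketched in Python with Pillow and NumPy; the file names are placeholders, and the ramp here is diagonal as in the question rather than vertical as in the -fx above:
import numpy as np
from PIL import Image

img = np.array(Image.open("tree.png").convert("RGBA")).astype(float)
h, w = img.shape[:2]

# Diagonal ramp: 1.0 at the top-left corner, 0.0 at the bottom-right
yy, xx = np.mgrid[0:h, 0:w]
ramp = 1.0 - (xx / (w - 1) + yy / (h - 1)) / 2.0

img[..., 3] *= ramp   # multiply the existing alpha channel by the gradient
Image.fromarray(img.astype(np.uint8)).save("result.png")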
I have an image like this from my windstation.
I have tried to get those lines recognized, but I am lost because none of the filters recognize the lines.
Any ideas what I could use to get it black & white with at least the needed lines?
A typical detection result is something like this:
I need to detect the edges of the digits, which seem not to be recognized with almost any settings.
This doesn't provide you with a complete guide to solving your image-processing question with OpenCV, but it contains some hints and observations that may help you get there. My weapon of choice is ImageMagick, which is installed on most Linux distros and is available for OS X and Windows.
Firstly, I note you have date and time across the top and you haven't cropped correctly at the lower right hand side - these extraneous pixels will affect contrast stretches, so I crop them off.
Secondly, I separate your image into 3 channels - R, G and B - and look at them all. The R and B channels are very noisy, so I would probably go with the Green channel. Alternatively, the Lightness channel is pretty reasonable if you go to HSL mode and discard the Hue and Saturation.
convert display.jpg -separate channel.jpg
(Red, Green and Blue channel images)
Now make a histogram to look at the tonal distribution:
convert display.jpg -crop 500x300+0+80 -colorspace hsl -separate -delete 0,1 -format %c histogram:png:ahistogram.png
Now I can see all your data are down the dark, left-hand end of the histogram, so I do a contrast stretch and a median filter to remove the noise
convert display.jpg -crop 500x300+0+80 -colorspace hsl -separate -delete 0,1 -median 9x9 -normalize -level 0%,40% z.jpg
And a final threshold to get black and white...
convert display.jpg -crop 500x300+0+80 -colorspace hsl -separate -delete 0,1 -median 9x9 -normalize -level 0%,40% -threshold 60% z.jpg
Of course, you can diddle around with the numbers and levels, but there may be a couple of ideas in there that you can develop... in OpenCV or ImageMagick.
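Since the question mentions OpenCV, here is a rough Python/OpenCV translation of the same pipeline (crop, HLS lightness, median filter, contrast stretch, threshold). All the numbers are the same guesses as above and will need tuning:
import cv2
import numpy as np

img = cv2.imread("display.jpg")
crop = img[80:380, 0:500]   # equivalent of -crop 500x300+0+80

# Lightness channel (OpenCV's HLS channel order is H, L, S)
light = cv2.cvtColor(crop, cv2.COLOR_BGR2HLS)[:, :, 1]

light = cv2.medianBlur(light, 9)                              # -median 9x9
light = cv2.normalize(light, None, 0, 255, cv2.NORM_MINMAX)   # -normalize
light = np.clip(light.astype(float) / 0.4, 0, 255).astype(np.uint8)  # roughly -level 0%,40%
_, bw = cv2.threshold(light, int(0.6 * 255), 255, cv2.THRESH_BINARY) # -threshold 60%
cv2.imwrite("z.png", bw)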
Given a JPEG, what is the formula to change the exposure of that JPEG by +/- 1 stop, also known as 1 EV? I want to simulate this exposure change. Is there a formula/method to do so?
I can demonstrate that using ImageMagick, which is included in most Linux distros and is available for OSX and Windows.
First, at the Terminal command line create an image:
convert -size 512x512 gradient:black-yellow gradient.png
Now, the way to effect a +1 stop exposure increase is to composite the image with itself using the Screen blending mode - it is available in Photoshop and ImageMagick and is described in the Wikipedia article on blend modes.
So, the formula to composite image A with image B is:
1-stop brighter image = 1-(1-A)(1-B)
but as we are compositing the image with itself, A and B are the same, so we effectively have
1-(1-A)(1-A)
ImageMagick refers to the pixels of an image using p rather than A, so we can do a 1-stop increase like this:
convert gradient.png -colorspace RGB -fx "(1-(1-p)*(1-p))" result.png
Note that the Wikipedia article, and ImageMagick's -fx, both assume your pixel intensities vary between 0 and 1.0. If you are using 8-bit values, you need to rescale by 255, namely
+1 stop brighter image = 255-(255-A)(255-A)/255
or, if using 16-bit values,
+1 stop brighter image = 65535-(65535-A)(65535-A)/65535
The above fx-based method is, however, very slow because -fx is interpreted rather than compiled, so a faster way to do it is:
convert gradient.png gradient.png -colorspace RGB -compose screen -composite screen.png
Just for fun, another way of looking at that is that we negate A, i.e. take 1-A, square it, and then negate again, so it can be done like this:
convert gradient.png -colorspace RGB -negate -evaluate pow 2 -negate result.png
The equivalent of -1 stop exposure decrease is to composite the image with itself using the Multiply blend mode, the formula being
1-stop darker image = A x B
which you would do faster with
convert gradient.png gradient.png -colorspace RGB -compose multiply -composite result.png
or even faster, by using memory-to-memory cloning rather than reading from disk twice, with
convert gradient.png -colorspace RGB +clone -compose multiply -composite result.png
but you could do it equally with
convert gradient.png -colorspace RGB -evaluate pow 2 result.png
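To tie the formulas together, here is a small Python/NumPy sketch of the same +/- 1 stop maths on an image normalized to [0, 1] (divide 8-bit data by 255 first):
import numpy as np

def one_stop(img, brighter=True):
    # img: float array in [0, 1]
    if brighter:
        return 1 - (1 - img) * (1 - img)   # screen the image with itself: +1 EV
    return img * img                       # multiply the image with itself: -1 EV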