ImageMagick's composite on HSL (not HSB nor HSV)

What I want to do is reproduce Photoshop's HSL-based blend modes (color/hue/saturation/luminosity) by writing a CUI tool.
Better still if I can do it via RMagick.
ImageMagick can handle the HSL colorspace, but its composite operators Colorize/Hue/Saturation/Luminize are hard-coded to work in HSB.
Is there any workaround that avoids writing pixel-by-pixel processing code?
Thanks.

I tried the separate-and-combine approach.
Then a story began.
ImageMagick 6.6.9-7 has a pinpoint bug in its RGB<->HSL conversion, and that is exactly the version Ubuntu 12.04 LTS's package repository provides... grrrr.
(The bug was fixed upstream at r4431; ImageMagick >= 6.6.9-9 is fine.)
So I sat down and did the math to obtain a simple -fx expression.
colorize_hsl.fx:
ul = u.lightness; vl = v.lightness;
bias = (ul < .5 ? ul : 1 - ul)/(vl < .5 ? vl : 1 - vl);
(v - vl)*bias + ul
That is an RGB-based formula that sets a new lightness while preserving hue and saturation.
To get luminize_hsl, exchange u and v.
The temporary variables (ul, vl and bias) are common to all channels, but the -fx engine may evaluate them three times, once per channel.
It's still not enough...
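For reference, applying the expression from its file would look something like the following (the file names here are placeholders; in -fx, u refers to the first image in the sequence and v to the second):
convert base.png overlay.png -fx @colorize_hsl.fx result.png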

Related

Implementing convolution from scratch in Julia

I am trying to implement convolution by hand in Julia. I'm not too familiar with image processing or Julia, so maybe I'm biting off more than I can chew.
Anyway, when I apply this method with a 3x3 edge filter edge = [0 -1 0; -1 4 -1; 0 -1 0] as convolve(img, edge), I get an error saying that my values exceed the allowed values for the RGBA type.
Code
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_w, half_kernel_h) = size(kernel) .÷ 2
    (width, height) = size(img)
    cpy_im = copy(img)
    for row ∈ 1+half_kernel_h:height-half_kernel_h
        for col ∈ 1+half_kernel_w:width-half_kernel_w
            from_row, to_row = row .+ (-half_kernel_h, half_kernel_h)
            from_col, to_col = col .+ (-half_kernel_h, half_kernel_h)
            cpy_im[row, col] = sum((kernel .* RGB.(img[from_row:to_row, from_col:to_col])))
        end
    end
    cpy_im
end
Error (original)
ArgumentError: element type FixedPointNumbers.N0f8 is an 8-bit type representing 256 values from 0.0 to 1.0, but the values (-0.0039215684f0, -0.007843137f0, -0.007843137f0, 1.0f0) do not lie within this range.
See the READMEs for FixedPointNumbers and ColorTypes for more information.
I am able to identify a simple case where such an error may occur (a white pixel surrounded by all black pixels, or vice versa). I tried "fixing" this by following the advice here from another Stack Overflow question, but I get more errors to the effect of Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
Code attempting to apply solution from the other SO question
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_w, half_kernel_h) = size(kernel) .÷ 2
    (width, height) = size(img)
    cpy_im = copy(img)
    for row ∈ 1+half_kernel_h:height-half_kernel_h
        for col ∈ 1+half_kernel_w:width-half_kernel_w
            from_row, to_row = row .+ [-half_kernel_h, half_kernel_h]
            from_col, to_col = col .+ [-half_kernel_h, half_kernel_h]
            cpy_im[row, col] = sum((kernel .* RGB.(img[from_row:to_row, from_col:to_col] ./ 2 .+ 128)))
        end
    end
    cpy_im
end
Corresponding error
MethodError: no method matching +(::ColorTypes.RGBA{Float32}, ::Int64)
Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
Closest candidates are:
+(::Any, ::Any, !Matched::Any, !Matched::Any...) at operators.jl:591
+(!Matched::T, ::T) where T<:Union{Int128, Int16, Int32, Int64, Int8, UInt128, UInt16, UInt32, UInt64, UInt8} at int.jl:87
+(!Matched::ChainRulesCore.AbstractThunk, ::Any) at ~/.julia/packages/ChainRulesCore/a4mIA/src/tangent_arithmetic.jl:122
Now, I can try using convert etc., but when I look at the big picture, I start to wonder what the idiomatic way of solving this problem in Julia is. And that is my question. If you had to implement convolution by hand from scratch, what would be a good way to do so?
EDIT:
Here is an implementation that works, though it may not be idiomatic
function convolve(img::Matrix{<:Any}, kernel)
    (half_kernel_h, half_kernel_w) = size(kernel) .÷ 2
    (height, width) = size(img)
    cpy_im = copy(img)
    # println(Dict("width" => width, "height" => height, "half_kernel_w" => half_kernel_w, "half_kernel_h" => half_kernel_h, "row range" => 1+half_kernel_h:(height-half_kernel_h), "col range" => 1+half_kernel_w:(width-half_kernel_w)))
    for row ∈ 1+half_kernel_h:(height-half_kernel_h)
        for col ∈ 1+half_kernel_w:(width-half_kernel_w)
            from_row, to_row = row .+ (-half_kernel_h, half_kernel_h)
            from_col, to_col = col .+ (-half_kernel_w, half_kernel_w)
            vals = Dict()
            for method ∈ [red, green, blue, alpha]
                x = sum((kernel .* method.(img[from_row:to_row, from_col:to_col])))
                if x > 1
                    x = 1
                elseif x < 0
                    x = 0
                end
                vals[method] = x
            end
            cpy_im[row, col] = RGBA(vals[red], vals[green], vals[blue], vals[alpha])
        end
    end
    cpy_im
end
First of all, the error
Math on colors is deliberately undefined in ColorTypes, but see the ColorVectorSpace package.
should direct you to read the docs of the ColorVectorSpace package, where you will learn that using ColorVectorSpace will enable math on RGB types. (The absence of default support is deliberate, because the way the image-processing community treats RGB is colorimetrically wrong. But everyone has agreed not to care, hence the ColorVectorSpace package.)
Second,
ArgumentError: element type FixedPointNumbers.N0f8 is an 8-bit type representing 256 values from 0.0 to 1.0, but the values (-0.0039215684f0, -0.007843137f0, -0.007843137f0, 1.0f0) do not lie within this range.
indicates that you're trying to write negative entries with an element type, N0f8, that can't support such values. Instead of cpy_im = copy(img), consider something like cpy_im = [float(c) for c in img] which will guarantee a floating-point representation that can support negative values.
Third, I would recommend avoiding steps like RGB.(img...) when nothing about your function otherwise addresses whether images are numeric, grayscale, or color. Fundamentally the only operations you need are scalar multiplication and addition, and it's better to write your algorithm generically leveraging only those two properties.
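A minimal sketch along those lines (my reading of that advice, not tested code from the answer) might be:
using ColorVectorSpace  # enables + and * on color types (loads ColorTypes as a dependency)

function convolve(img::AbstractMatrix, kernel::AbstractMatrix)
    hh, hw = size(kernel) .÷ 2
    height, width = size(img)
    out = [float(c) for c in img]  # a float element type can hold out-of-range values
    for row in 1+hh:height-hh, col in 1+hw:width-hw
        # only scalar multiplication and addition are used, so this works for
        # numeric, grayscale, and color element types alike
        out[row, col] = sum(kernel .* float.(img[row-hh:row+hh, col-hw:col+hw]))
    end
    return out
end
Like the original, this computes correlation rather than true convolution; flip the kernel (e.g. with rot180) if that distinction matters to you.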
Tim Holy's answer above is correct - keep things simple and avoid relying on third-party packages when you don't need to.
I might point out that another option you may not have considered is to use a different algorithm. What you are implementing is the naive method, whereas many convolution routines use different algorithms for different sizes, such as im2col and Winograd (you can look these two up; I have a website that covers the idea behind both here).
The im2col routine might be worth doing, as essentially you can break the routine into several pieces:
Unroll all 'regions' of the image to do a dot-product with the filter/kernel on, and stack them together into a single matrix.
Do a matrix-multiply with the unrolled input and filter/kernel.
Roll the output back into the correct shape.
It might be more complicated overall, but each part is simpler, so you may find this easier to do. A matrix-multiply routine is definitely quite easy to implement. For 1x1 (single-pixel) convolutions where the image and filter have the same ordering (i.e. NCHW images and FCHW filters), the first and last steps are trivial, as essentially no rolling/unrolling is necessary.
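A rough Julia sketch of those three steps, assuming a single-channel numeric image and "valid" (no-padding) output, might be:
# im2col: unroll every kernel-sized patch into a column, multiply once, reshape back
function convolve_im2col(img::Matrix{<:Real}, kernel::Matrix{<:Real})
    kh, kw = size(kernel)
    h, w = size(img)
    oh, ow = h - kh + 1, w - kw + 1            # "valid" output size
    # 1. unroll all regions of the image into a (kh*kw) x (oh*ow) matrix
    cols = Matrix{Float64}(undef, kh*kw, oh*ow)
    for j in 1:ow, i in 1:oh
        cols[:, (j-1)*oh + i] = vec(img[i:i+kh-1, j:j+kw-1])
    end
    # 2. a single matrix-multiply with the flattened kernel
    out = vec(kernel)' * cols
    # 3. roll the output back into the correct shape
    return reshape(out, oh, ow)
end
Step 2 is where the speed comes from: that one large multiply can go through an optimized BLAS routine instead of many small dot products.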
A final word of advice: start simple and add the code to handle edge cases afterwards; convolutions are definitely fiddly to work with.
Hope this helps!

Adaptive Fourier Filter

Main question
Has someone already created a free adaptive Fourier filter for Digital Micrograph (or, alternatively, ImageJ)?
About the adaptive Fourier filter
I want to use some effective filtering processes for my TEM image processing. I came across the adaptive Fourier filtering technique introduced by Möbus et al. in 1993 [1]. In short this is a reciprocal space filtering technique with the workflow:
FFT( Image ) --> Mask * FFT( Image ) --> iFFT( Mask * FFT( Image ) )
The new feature of this filter is that the shape of the filter is adapted to the spectrum of the image and the windows of the mask are automatically placed at all positions, which allows an optimal separation of signal from noise [2].
What have I already tried?
The filter is available in the HREM Filters Pro package from HREM Research (https://www.hremresearch.com/Eng/plugin/FiltersEng.html), but my institute does not have a license for this. I have found DM scripts for other filters, such as Wiener filters and average background subtraction filters, on the DM script database (https://www.felmi-zfe.at/dm_script), but there is no adaptive filter.
So what was the question again?
Since I have no experience with DM scripting myself, I would prefer to find or adjust an already existing DM script on adaptive Fourier filtering. Alternatively, I also do some of my image processing in ImageJ, so a script for this program would work as well. Do any of you know whether such scripts already exist?
Sources
[1] Möbus, G., G. Necker, and M. Rühle. "Adaptive Fourier-filtering technique for quantitative evaluation of high-resolution electron micrographs of interfaces." Ultramicroscopy 49.1-4 (1993): 46-65.
[2] Kret, S., et al. "Extracting quantitative information from high resolution electron microscopy." physica status solidi (b) 227.1 (2001): 247-295.
The Adaptive Threshold ImageJ plugin, which can be downloaded from
https://sites.google.com/site/qingzongtseng/adaptivethreshold
is indeed an adaptive filter.
I'm not aware of an (open source) script for this, but a base template for a Fourier-space filtering script in DigitalMicrograph would be:
// Create and show test image
realimage img := RealImage( "Test Image 2D", 4, 512, 512 )
img = abs( itheta*2*icol/(iwidth+1)* sin(iTheta*10) ) + 15*(irow<iheight/2 ? irow : iheight-irow )/iheight
img = PoissonRandom(100*img)
img.ShowImage()
// Transform to Fourier Space
compleximage img_FFT := FFT(img)
// Create "Mask" or Filter in Fourier Space
// This is where all the "adaptive" things have to happen to create
// the correct mask. The below is just a dummy
image mask := RealImage("Mask",4, 512,512 )
mask = (iradius<iheight/3 && iradius>5 ) ? 1 : 0
mask = SQRT((icol-iwidth/2-100)**2+(irow-iheight/2-50)**2) < 25 ? 0 : mask
mask = SQRT((icol-iwidth/2+100)**2+(irow-iheight/2+50)**2) < 25 ? 0 : mask
mask.ShowImage()
// Apply mask
img_FFT *= mask
img_FFT.SetName( "Masked FFT" )
img_FFT.ShowImage()
// Transform back
image img_filter := modulus(iFFT(img_FFT))
img_filter.SetName( img.GetName() + " Filtered" )
img_filter.ShowImage()
// Just arrange
EGUPerformActionWithAllShownImages("arrange")

Finding the Hue for a pixel in an image

I am trying to convert my RGB image to HSV. I am able to find the value and saturation but ran into a problem with hue. I searched for the formula for finding the hue value and found one here.
How do you get the hue of a #xxxxxx colour?
But there, the accepted answer discusses only 3 cases:
R is maximum
G is maximum
B is maximum
(So this is not a duplicate question)
But what about other cases such as
R >= G > B or
B >= G > R or
G >= B > R, etc.?
Clearly, here there is no single value which is the maximum. So to clear my doubt, I searched Google and found the following page:
https://en.wikipedia.org/wiki/Hue
Here a table is given that is used for finding the hue value, and 6 possible cases are also given. My questions are:
what are the values 2, 4 or 6 in the formula given in the table, and how are they calculated?
Why are only 6 cases possible (as shown in the table)? What about
G > B >= R or
G >= B >= R or
B >= R > G or
B >= R >= G etc.
You could let ImageMagick (installed on most Linux distros and available for OSX and Windows) tell you the answer by creating a single pixel RGB image and converting to HSL colorspace:
convert xc:"#ffffff" -colorspace hsl -format "%[pixel:p{0,0}]" info:
hsl(0%,0%,100%)
or
convert xc:"rgb(127,34,56)" -colorspace hsl -depth 8 txt:
# ImageMagick pixel enumeration: 1,1,255,hsl
0,0: (96.0784%,57.6471%,31.7647%) #F59351 hsl(96.0784%,57.6471%,31.7647%)
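For what it's worth, the 2 and 4 in the Wikipedia table are sector offsets: hue is measured in six 60° sectors around the color wheel, and the 6 is the wrap-around (mod 6), since six sectors make the full 360°. The table only needs one case per maximal channel, because the ordering of the other two channels is captured by the sign of the fraction; e.g. G > B >= R and G >= B >= R both fall under "G is maximum". A small Julia sketch of that piecewise formula:
# hue in degrees from r, g, b in [0, 1], following the standard max-based table
function hue(r, g, b)
    mx, mn = max(r, g, b), min(r, g, b)
    delta = mx - mn
    delta == 0 && return 0.0        # achromatic: hue is undefined; 0 by convention
    h = if mx == r
        mod((g - b) / delta, 6)     # red sector: hue starts at 0°
    elseif mx == g
        (b - r) / delta + 2         # +2 shifts the green sector to 120°
    else
        (r - g) / delta + 4         # +4 shifts the blue sector to 240°
    end
    return 60 * h                   # each unit of h spans one 60° sector
end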

What is the MS Office contrast algorithm?

Does anyone know what formula MS Office uses to apply contrast adjustments to an image?
It looks like a quadratic function, but I couldn't work it out.
Not too sure what formula they use. I doubt you'll find out either, since nothing's open source, but here is code I use for contrast adjustment:
adjust_contrast <- function(im, contrast=10) {  # the name is arbitrary; assigned so it can be called
    # Get c-value from contrast
    c = (100.0 + contrast) / 100.0
    # Apply the contrast
    im = ((im - 0.5) * c) + 0.5
    # Cap anything that went outside the bounds of 0 or 1
    im[im < 0] = 0
    im[im > 1] = 1
    # Return the image
    return(im)
}
This works really well for me.
Note
this assumes your pixel intensity values are on a scale of 0 to 1. If they are on a 0-255 scale, change the corresponding lines to im = ((im - 0.5*255) * c) + 0.5*255 and im[im > 255] = 255
the function above is in the R language
Good luck

Converting RGB to grayscale/intensity

When converting from RGB to grayscale, it is said that specific weights ought to be applied to the channels R, G, and B. These weights are: 0.2989, 0.5870, 0.1140.
It is said that the reason for this is different human perception of/sensitivity to these three colors. Sometimes it is also said that these are the values used to compute the NTSC signal.
However, I didn't find a good reference for this on the web. What is the source of these values?
See also these previous questions: here and here.
The specific numbers in the question are from CCIR 601 (see Wikipedia article).
If you convert RGB -> grayscale with slightly different numbers / different methods, you won't see much difference at all on a normal computer screen under normal lighting conditions -- try it.
Here are some more links on color in general:
Wikipedia Luma
Bruce Lindbloom's outstanding web site
chapter 4 on Color in the book by Colin Ware, "Information Visualization", ISBN 1-55860-819-2; this long link to Ware on books.google.com may or may not work
cambridgeincolor: excellent, well-written "tutorials on how to acquire, interpret and process digital photographs using a visually-oriented approach that emphasizes concept over procedure"
Should you run into "linear" vs "nonlinear" RGB, here's part of an old note to myself on this. Repeat, in practice you won't see much difference.
### RGB -> ^gamma -> Y -> L*
In color science, the common RGB values, as in html rgb( 10%, 20%, 30% ), are called "nonlinear" or Gamma corrected. "Linear" values are defined as
Rlin = R^gamma, Glin = G^gamma, Blin = B^gamma
where gamma is 2.2 for many PCs. The usual R G B are sometimes written as R' G' B' (R' = Rlin ^ (1/gamma)) (purists tongue-click) but here I'll drop the '.
Brightness on a CRT display is proportional to RGBlin = RGB ^ gamma, so 50% gray on a CRT is quite dark: .5 ^ 2.2 = 22% of maximum brightness. (LCD displays are more complex; furthermore, some graphics cards compensate for gamma.)
To get the measure of lightness called L* from RGB, first divide R G B by 255, and compute
Y = .2126 * R^gamma + .7152 * G^gamma + .0722 * B^gamma
This is Y in XYZ color space; it is a measure of color "luminance". (The real formulas are not exactly x^gamma, but close; stick with x^gamma for a first pass.)
Finally,
L* = 116 * Y^(1/3) - 16
"... aspires to perceptual uniformity [and] closely matches human perception of lightness." -- Wikipedia Lab color space
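As a worked example, here is that pipeline sketched in Julia (using the simple x^gamma approximation as suggested above, and ignoring the small linear segment of the exact CIELAB formula):
# nonlinear 8-bit sRGB -> approximate linear RGB -> luminance Y -> lightness L*
function lightness_Lstar(r8, g8, b8; gamma=2.2)
    r, g, b = (r8, g8, b8) ./ 255    # scale to 0..1
    y = 0.2126 * r^gamma + 0.7152 * g^gamma + 0.0722 * b^gamma
    return 116 * y^(1/3) - 16        # CIE L*: roughly 0 (black) to 100 (white)
end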
I found this publication referenced in an answer to a previous similar question. It is very helpful, and the page has several sample images:
Perceptual Evaluation of Color-to-Grayscale Image Conversions by Martin Čadík, Computer Graphics Forum, Vol 27, 2008
The publication explores several other methods to generate grayscale images with different outcomes:
CIE Y
Color2Gray
Decolorize
Smith08
Rasche05
Bala04
Neumann07
Interestingly, it concludes that there is no universally best conversion method, as each performed better or worse than others depending on input.
Here's some C code to convert RGB to grayscale.
The real weighting used for RGB-to-grayscale conversion is 0.299R + 0.587G + 0.114B.
These weights aren't absolutely critical, so you can play with them. I have made them 0.25R + 0.5G + 0.25B, which produces a slightly darker image.
NOTE: The following code assumes xRGB 32bit pixel format
unsigned int *pntrBWImage=(unsigned int*)..data pointer..;  // assumes 4*width*height bytes, i.e. 32 bits / 4 bytes per pixel
unsigned int fourBytes;
unsigned char r,g,b;
// I_Out: destination grayscale buffer, declared elsewhere
for (int index=0;index<width*height;index++)
{
    fourBytes=pntrBWImage[index];  // caches 4 bytes at a time
    r=(fourBytes>>16);             // assigning to unsigned char keeps only the low byte of each shifted value
    g=(fourBytes>>8);
    b=fourBytes;
    I_Out[index] = (r>>2) + (g>>1) + (b>>2);   // 0.25R + 0.5G + 0.25B via bit shifts; runs in 0.00065s on my pc and produces slightly darker results
    //I_Out[index] = ((unsigned int)(r+g+b))/3; // pure average; runs in 0.0011s on my pc
}
Check out the Color FAQ for information on this. These values come from the standardization of RGB values that we use in our displays. Actually, according to the Color FAQ, the values you are using are outdated, as they are the values used for the original NTSC standard and not modern monitors.
What is the source of these values?
The "source" of the coefficients posted are the NTSC specifications which can be seen in Rec601 and Characteristics of Television.
The "ultimate source" are the CIE circa 1931 experiments on human color perception. The spectral response of human vision is not uniform. Experiments led to weighting of tristimulus values based on perception. Our L, M, and S cones1 are sensitive to the light wavelengths we identify as "Red", "Green", and "Blue" (respectively), which is where the tristimulus primary colors are derived.2
The linear light3 spectral weightings for sRGB (and Rec709) are:
Rlin * 0.2126 + Glin * 0.7152 + Blin * 0.0722 = Y
These are specific to the sRGB and Rec709 colorspaces, which are intended to represent computer monitors (sRGB) or HDTV monitors (Rec709), and are detailed in the ITU documents for Rec709 and also BT.2380-2 (10/2018)
FOOTNOTES
(1) Cones are the color detecting cells of the eye's retina.
(2) However, the chosen tristimulus wavelengths are NOT at the "peak" of each cone type; instead, tristimulus values are chosen such that they stimulate one particular cone type substantially more than another, i.e. separation of stimulus.
(3) You need to linearize your sRGB values before applying the coefficients. I discuss this in another answer here.
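Since footnote 3 matters in practice, here is a small sketch in Julia of the exact piecewise sRGB linearization (as opposed to the plain x^2.2 approximation used elsewhere in this thread), followed by the weighted sum:
# exact sRGB -> linear conversion for one channel value c in [0, 1]
srgb_to_linear(c) = c <= 0.04045 ? c / 12.92 : ((c + 0.055) / 1.055)^2.4

# linear-light luminance Y with the sRGB/Rec709 weights
luminance(r, g, b) = 0.2126 * srgb_to_linear(r) +
                     0.7152 * srgb_to_linear(g) +
                     0.0722 * srgb_to_linear(b)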
Starting a list to enumerate how different software packages do it. Here is a good CVPR paper to read as well.
FreeImage
#define LUMA_REC709(r, g, b) (0.2126F * r + 0.7152F * g + 0.0722F * b)
#define GREY(r, g, b) (BYTE)(LUMA_REC709(r, g, b) + 0.5F)
OpenCV, nVidia Performance Primitives, Intel Performance Primitives, Matlab
All four use the Rec601 weights:
nGray = 0.299F * R + 0.587F * G + 0.114F * B;
These values vary from person to person, especially for people who are colorblind.
Is all this really necessary? Human perception and CRT vs. LCD displays will vary, but the R, G, B intensities do not. Why not L = (R + G + B)/3 and set the new RGB to L, L, L?
