Preserving negative values after filtering - iOS

Consider the really simple difference kernel
kernel vec4 diffKernel(__sample image1, __sample image2)
{
return vec4(image1.rgb - image2.rgb, 1.0);
}
When used as a CIColorKernel, this produces the difference between two images. However, any values for which image1.rgb < image2.rgb (pointwise) will be forced to zero due to the "clamping" nature of the outputs of kernels in CIKernel.
For many image processing algorithms, such as those involving image pyramids (see my other question on how this can be achieved in Core Image), it is important to preserve these negative values for later use (reconstructing the pyramid, for example). If 0s are used in their place, you will actually get an incorrect output.
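To make the failure mode concrete, here is a quick NumPy sketch (purely illustrative, independent of Core Image) of how clamping negative differences to zero breaks a later reconstruction:
import numpy as np

base = np.array([0.2, 0.5, 0.8])    # stand-in for image2 values
detail = np.array([0.3, 0.4, 0.6])  # stand-in for image1 values
diff = detail - base                # true difference, may be negative

clamped = np.clip(diff, 0.0, 1.0)   # what a clamping kernel would store

print(base + diff)     # [0.3 0.4 0.6] -> exact reconstruction
print(base + clamped)  # [0.3 0.5 0.8] -> wrong wherever diff was negative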
One way I've seen is to store abs(image1.rgb - image2.rgb) and build a second image whose RGB values store 0 or 1 depending on whether a negative sign is attached to that value, then do a multiply blend weighted with -1 in the correct places.
What are some other ways one can store the sign of a pixel value? Perhaps we can use the alpha channel if it is unused?

I actually ended up figuring this out: you can set the working-format option on the CIContext so that intermediates are computed in the kCIFormatAf format. This means that any calculations done in that context are carried out in floating-point precision, so values outside the range [0, 1] are preserved from one filter to the next!

OpenCV: How to use free scaling parameter (alpha) when dealing with getOptimalNewCameraMatrix and stereoRectify?

How to use the free scaling parameter (alpha) when dealing with getOptimalNewCameraMatrix and stereoRectify: should one use the same value?
As far as I understand it, a few things that led me to this question are worth listing:
In getOptimalNewCameraMatrix, OpenCV doc says "alpha Free scaling parameter between 0 (when all the pixels in the undistorted image are valid) and 1 (when all the source image pixels are retained in the undistorted image)" [sounds to me like 1 = retain source pixels = minimize loss]
In stereoRectify, OpenCV doc says "alpha Free scaling parameter.... alpha=0 means that ... (no black areas after rectification). alpha=1 means that ... (no source image pixels are lost)"
So in the end, alpha seems to be a parameter that may "act" the same way in both? (1 = no source pixel lost, it sounds like, though I'm not sure.)
As far as I understand it, after calibrateCamera one may want to call getOptimalNewCameraMatrix (computing new matrices as outputs) and then stereoRectify (using the new computed matrices as inputs): does one want to use the same alpha?
Are these 2 alphas the same? Or does one want to use 2 different alphas?
The alphas are the same.
The choice of value depends entirely on the application. Ask yourself:
Does the application need to see all the input pixels to do its job (because, for example, it must use all the "extra" FOV near the image edges, or because you know that the scene's subject that's of interest to the application may be near the edges and you can't lose even a pixel of it)?
Yes: choose alpha=1
No: choose a value of alpha that keeps the "interesting" portion of the image predictably inside the undistorted image.
In the latter case (again, depending on the application) you may need to compute the boundary of the undistorted image within the input one. This is just a poly-curve that can be approximated by a polygon to any level of accuracy you need, down to the pixel. Or you can use a mask.
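For what it's worth, here is a minimal Python sketch of the workflow described in the question, passing the same alpha to both calls. The variable names (camera_matrix, dist_coeffs, image_size, R, T, and the left/right variants) are placeholders for your own calibration outputs:
import cv2

alpha = 1.0  # 1 = keep all source pixels, 0 = keep only valid (non-black) pixels

# Per-camera undistortion matrix (after calibrateCamera)
new_K, valid_roi = cv2.getOptimalNewCameraMatrix(
    camera_matrix, dist_coeffs, image_size, alpha)

# Stereo rectification (after stereoCalibrate), using the same alpha
R1, R2, P1, P2, Q, roi1, roi2 = cv2.stereoRectify(
    camera_matrix_left, dist_coeffs_left,
    camera_matrix_right, dist_coeffs_right,
    image_size, R, T, alpha=alpha)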

Scaling y-axis of a histogram

I am creating a histogram of an image. I need a way to scale it along the y-axis to display it nicely, as standard image/video processing programs do. That is, I need to strengthen the small values and weaken the big values.
What I tried to do so far:
Scaling the y-values by dividing them by the greatest y-value. This let me see the histogram, but the small values are still almost indistinguishable from zero.
What I have seen:
In a standard video processing tool, the (let's say) three biggest values have the same y-values in the histogram display, even though their real values differ. And the small values are amplified on the histogram.
I would be thankful for the tips/formula/algorithm.
You can create a lookup table (LUT) and fill it with values from a curve that describes the desired behavior. It seems you want something like a gamma curve:
for i in 0..MaxValue
    LUT[i] = MaxValue (255?) * Power(i / MaxValue, gamma)
To apply it:
for every pixel
    NewValue = LUT[OldValue]
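As a concrete (hedged) example of the same idea in Python/NumPy, assuming an 8-bit grayscale image img and a gamma below 1 so that small values get boosted:
import numpy as np

gamma = 0.5      # < 1 amplifies small values, > 1 suppresses them
max_value = 255  # 8-bit range

# Build the lookup table once
lut = (max_value * (np.arange(max_value + 1) / max_value) ** gamma).astype(np.uint8)

# Apply it to every pixel (img is assumed to be a uint8 array)
new_img = lut[img]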

Pass histogram equalized image to second pass hist eq?

Suppose that a digital image is subjected to histogram equalization. The problem is the following: show that a second pass of histogram equalization (on the histogram-equalized image) will produce exactly the same result as the first pass.
Here is the solution: problem 3.7
I am not able to understand the following part of the answer: because every pixel (and no others) with value r_k is mapped to s_k, n_{s_k}=n_{r_k}.
This fact stems from the assumptions made on the transformation T representing the histogram equalization operation:
a. T is single-valued and monotonically increasing;
b. T(r) is in [0, 1] for every r in [0,1].
See the corresponding chapter in "Digital Image Processing" by Gonzalez and Woods. In particular, it can be verified that these two assumptions hold for the discrete histogram equalization (see the equations in Problem 3.7), at least if none of the histogram values n_{r_k} is 0 (no empty bins), because then T is strictly monotonic, implying a one-to-one mapping; this in turn implies that every pixel with value r_k (and no other) is mapped to s_k. This is shown in the solution of Problem 3.10 in the PDF you provided. If one or more of the histogram values are 0, this might not be the case anymore. However, in the continuous domain (in which histogram equalization is usually derived), both assumptions hold, as is shown in the corresponding chapter of the book by Gonzalez and Woods.
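For concreteness, here is a short sketch of the discrete argument in the usual Gonzalez & Woods notation (n total pixels, intensities normalized to [0, 1] as in assumption b); this is my own summary of the step being asked about, not the PDF's wording. The first pass maps

s_k = T(r_k) = \sum_{j=0}^{k} \frac{n_{r_j}}{n}

Applying the same transformation to the equalized image gives

T(s_k) = \sum_{j=0}^{k} \frac{n_{s_j}}{n} = \sum_{j=0}^{k} \frac{n_{r_j}}{n} = s_k

where the middle equality uses exactly the fact in question, n_{s_j} = n_{r_j}: every pixel with value r_j, and no others, is mapped to s_j, so the second pass reproduces the first.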

Difference between Texture2D and Texture2DMS in DirectX11

I'm using SharpDX and I want to do antialiasing in the Depth buffer. I need to store the Depth Buffer as a texture to use it later. So is it a good idea if this texture is a Texture2DMS? Or should I take another approach?
What I really want to achieve is:
1) Depth buffer scaling
2) Depth test supersampling
(terms I found in Section 3.2 of this paper: http://gfx.cs.princeton.edu/pubs/Cole_2010_TFM/cole_tfm_preprint.pdf)
The paper calls for a depth pre-pass. Since this pass requires no color, you should leave the render target unbound, and use an "empty" pixel shader. For depth, you should create a Texture2D (not MS) at 2x or 4x (or some other 2Nx) the width and height of the final render target that you're going to use. This isn't really "supersampling" (since the pre-pass is an independent phase with no actual pixel output) but it's similar.
For the second phase, the paper calls for doing multiple samples of the high-resolution depth buffer from the pre-pass. If you followed the sizing above, every pixel will correspond to some (2N)^2 depth values. You'll need to read these values and average them.

Fortunately, there's a hardware-accelerated way to do this (called PCF) using SampleCmp with a COMPARISON sampler type. This samples a 2x2 stamp, compares each value to a specified value (pass in the second-phase calculated depth here, and don't forget to add some epsilon value (e.g. 1e-5)), and returns the averaged result. Do 2x2 stamps to cover the entire area of the first-phase depth buffer associated with this pixel, and average the results. The final result represents how much of the current line's spine corresponds to the foremost depth of the pre-pass. Because of the PCF's smooth filtering behavior, as lines become visible, they will slowly fade in, as opposed to the aliased "dotted" line effect described in the paper.
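PCF itself is a GPU sampler feature, but as a (hedged) illustration of the comparison-and-average being described, here is a language-agnostic NumPy sketch that does the same thing on the CPU; depth_hi, ref_depth, scale, and eps are hypothetical inputs, not part of any D3D or SharpDX API:
import numpy as np

def coverage(depth_hi, ref_depth, scale, eps=1e-5):
    # depth_hi:  (H*scale, W*scale) depth from the high-resolution pre-pass
    # ref_depth: (H, W) depth computed in the second phase
    # Returns, per low-res pixel, the fraction of its scale x scale block of
    # pre-pass samples at which ref_depth is not behind the pre-pass depth,
    # i.e. how much of the pixel corresponds to the foremost pre-pass surface.
    H, W = ref_depth.shape
    blocks = depth_hi.reshape(H, scale, W, scale)
    passed = ref_depth[:, None, :, None] <= blocks + eps
    # Averaging the boolean comparisons is what PCF does in hardware, 2x2 at a time.
    return passed.mean(axis=(1, 3))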

Photoshop's RGB levels with ImageMagick

I'm attempting to convert some effects created in Photoshop into code for use with php/imagemagick. Right now I'm specifically interested in how to recreate Photoshop's RGB levels feature. I'm not really familiar with the Photoshop interface, but this is the info that I am given:
RGB Level Adjust
Input levels: Shadow 0, Midtone 0.92, Highlight 255
Output levels: Shadow 0, Highlight 255
What exactly are the input levels vs. the output levels? How would I translate this into ImageMagick? Below you can see what I have tried, but it does not correctly render the desired effect (converting Photoshop's 0-255 scale to 0-65535):
$im->levelImage(0, 0.92, 65535);
$im->levelImage(0, 1, 65535);
This was mostly a stab in the dark, since the parameter names don't line up and, for output levels, the number of parameters doesn't even match. Basically I don't understand exactly what is going on when Photoshop applies the adjustment. I think that's my biggest hurdle right now. Once I get that, I'll need to find corresponding effects in ImageMagick.
Can anyone shed some light on what's going on in Photoshop and how to replicate that with ImageMagick?
Shadows, midtones and highlights are colors that fall within a certain range of luminosity. For example, shadows are the lower range of the luminosity histogram, midtones are colors in the middle, and highlights are the ones up high. However, you can't use a hard limit on these values, which is why you will have to use curves that weight the histogram (a pixel may lie in multiple ranges at the same time).
To adjust shadows, midtones and highlights separately, you will need to create a weighted sum per pixel that uses the current shadow, midtone and highlight values to create a resultant value.
I don't think you can do this directly using the ImageMagick APIs; perhaps you could simply write it as a filter.
Hope this helps.
So I stumbled across this website: http://www.fmwconcepts.com/imagemagick/levels/index.php
Based on the information there, I was able to come up with the following php which seems pretty effective at simulating what Photoshop does with input and output and all that.
function levels($im, $inshadow, $midtone, $inhighlight, $outshadow, $outhighlight, $channel = self::CHANNEL_ALL) {
    $im->levelImage($inshadow, $midtone, $inhighlight, $channel);
    $im->levelImage(-$outshadow, 1.0, 255 + (255 - $outhighlight), $channel);
}
This assumes that the parameters to levelImage for the black point and white point are on a scale of 0-255. They might actually be 0-65535 on your system. If that's the case, it's easy enough to adjust. You can also check what value your setup uses with $im->getQuantumRange(). It will return an array with a string version and a long version of the quantum. From there it should be easy enough to normalize the values or just use the new range.
See the documentation: The first value is the black point (shadow) input value, the middle is a gamma (which I'm guessing is the same as Photoshop's midpoint), and the last is the white point (highlight) input value.
The output values are fixed at the quantum values of the image type, there's no need to specify them.
