Tonal value adjustment sliders, calculating values and positions - image-processing

For processing raw photo data (16-bit linear) I'm programming a GUI for tonal value adjustment like the one in an old version of Photoshop that I have:
A histogram is displayed, and below it three sliders: one on the left for shadows, one on the right for bright tones, and one in the middle, I guess for the gamma. Above the histogram are three fields containing numerical values representing the slider positions.
When I move the left or the right slider, the one in the middle always moves to remain in the middle between the two, and the respective numerical value doesn't change.
When I move the slider in the middle, the numerical value (gamma?) changes; the values and positions of the left and right ones remain unchanged.
I guess the meaning of the values is: the left one is the threshold for black, everything below it becomes black in the result; the right one does the same for white, everything above that threshold is set to white. The values in between are spread or shrunk to fill the space in between.
My questions are:
How do I calculate the new gamma value when I move the middle slider?
To which Image::Magick function should I "feed" the three values (lower and upper threshold, gamma value) to get the desired result? -level black_point{,white_point}{%}{,gamma} looks like it's the right one?
EDIT:
I thought the position of the middle slider in PS would somehow be related to the histogram above, but I have looked again and I think that's not the case. It just sets the gamma within a range of 9.9 to 0.1. So I found the answer to my first question.
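In case it's useful to later readers: the three values are typically combined as in the sketch below. This is plain numpy rather than Image::Magick, and the slider-to-gamma mapping is only an assumption (a logarithmic curve that puts gamma 1.0 in the middle of the 0.1..9.9 range); -level black_point,white_point,gamma should take the corresponding values, though check the ImageMagick docs for exactly how its gamma is applied.

import numpy as np

def apply_levels(img, black, white, gamma):
    # Levels on a linear 16-bit image (values 0..65535): everything at or
    # below `black` goes to 0, everything at or above `white` goes to full
    # scale, and the midtones are remapped with the gamma exponent.
    x = img.astype(np.float64)
    x = np.clip((x - black) / float(white - black), 0.0, 1.0)
    x = x ** (1.0 / gamma)      # gamma > 1 lightens midtones, < 1 darkens them
    return np.round(x * 65535).astype(np.uint16)

def slider_to_gamma(pos, lo=0.1, hi=9.9):
    # pos is the middle slider position in 0..1 (0 = far right, 1 = far left).
    # Logarithmic interpolation so that pos = 0.5 gives a gamma of about 1.0.
    return lo * (hi / lo) ** pos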

Related

Right sequence of applying Color Adjustments to Image

I am working on a video editing application for iOS powered by Metal. I'm implementing filters and adjustments for videos and images. For now I have all the appropriate shaders to change a video's or image's brightness, exposure, contrast, saturation, hue and vibrance. The part I do not understand is the right order to apply the adjustments in, because the final result depends on the order (and the results are very different).
For example should I apply
Brightness -> Exposure -> Contrast -> Saturation
or
Exposure -> Contrast -> Brightness -> Saturation
or any other order?
I cannot find any resource online that clarifies the right sequence of applying adjustments. If there is a sequence that is considered right, what is it, and why?
Thanks.
When making image adjustments it's important to keep the R, G, and/or B values of pixels from clipping at 0 (left side) or 255 (right side). In an RGB histogram this shows up as a high spike at 0 and/or 255.
Let's look roughly at what the four adjustments you mentioned do to the histogram. I describe only a positive adjustment (a negative one is simply the reverse).
Exposure: Stretches it out to the right. Unless the histogram has an empty right side, clipping will be guaranteed.
Contrast: Stretches it out to the left and right, but much faster to the left. Unless the histogram has an empty left and/or right side(s), clipping will be guaranteed.
Brightness: Left side is pushed down and right side is pulled up. If you go too far the right side will clip.
Saturation: G will be more or less untouched; the R and B peaks will move left or right and can be smeared out. Clipping can occur, but not as severely as with the other adjustments.
Note that exposure and contrast adjustments in particular have overlapping effects, and that brightness does something similar as well.
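To make that concrete, here is a rough numpy sketch of one common form of each adjustment (the exact formulas differ between applications, so treat these as illustrations, not as what Metal or Photoshop does):

import numpy as np

def adjust(rgb, exposure=0.0, contrast=0.0, brightness=0.0, saturation=0.0):
    # rgb is a float array in 0..1 with shape (..., 3).
    x = rgb.astype(np.float64)
    x = x * (2.0 ** exposure)                   # exposure: multiply, stretches the histogram right
    x = (x - 0.5) * (1.0 + contrast) + 0.5      # contrast: stretch around mid-gray, both ends move
    x = x + brightness                          # brightness: shift the whole histogram
    luma = x @ np.array([0.299, 0.587, 0.114])  # saturation: push channels away from the gray value
    x = luma[..., None] + (x - luma[..., None]) * (1.0 + saturation)
    return np.clip(x, 0.0, 1.0)                 # everything outside 0..1 clips, which is what you want to avoid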
Based on the above (and my experience as a photographer plus having built the image processing pipeline of the Lapse app), I'd say that to get the best results you should:
First do an exposure adjustment, but only if the histogram has an (almost) empty far-right area. The amount of this adjustment should also depend on the size of that empty right side.
Then an empty far-left area of the histogram can be filled by increasing the contrast.
However, if there are empty areas on both the left and the right, you could skip the exposure correction and try to resolve both by increasing contrast.
Now that the histogram is 'filled' from left to right you can make the look a bit darker or lighter by adjusting brightness.
Finally, you can make color adjustments like saturation.
As you can see, which adjustments you apply depends on the image. And there are many, many more adjustments that, when combined, can produce almost the same end result.
Personally I hardly use brightness when I edit images manually.
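Putting the steps above together, a toy version of that decision flow could look like this (float RGB in 0..1; the thresholds and amounts are invented for the example, a real pipeline would derive them from the size of the empty histogram areas):

import numpy as np

def auto_order(img):
    hist, _ = np.histogram(img, bins=256, range=(0.0, 1.0))
    frac = hist / hist.sum()
    empty_right = frac[230:].sum() < 0.02        # (almost) empty far-right area
    empty_left = frac[:26].sum() < 0.02          # (almost) empty far-left area

    if empty_right and not empty_left:
        img = img * 2.0 ** 0.5                   # 1. exposure, only while the right side is free
    if empty_left or empty_right:
        img = (img - 0.5) * 1.2 + 0.5            # 2. contrast fills the remaining empty ends
    img = img + 0.05                             # 3. brightness to make the look lighter or darker
    luma = (img @ np.array([0.299, 0.587, 0.114]))[..., None]
    img = luma + (img - luma) * 1.1              # 4. color adjustments (saturation) last
    return np.clip(img, 0.0, 1.0)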
For some more background on histograms and image adjustments have a look at these fabulous articles from Cambridge in Colour:
Tones & Contrast
Luminosity & Color
And, there's a whole list of related ones.

Contrast stretching over a small range of intensities

What's the use of applying contrast stretching over a small range of intensities in an image (take, for example, the intensity transform below)? I have the same question for contrast shrinking.
Contrast stretching for the entire range of intensities can make the image clearer (so its benefit is obvious in particular cases). My guess is that stretching the contrast over a small range of intensities makes the areas with those intensities more distinguishable.
Just have a look at the graph you provided (although a graph without axis labels is pretty useless by itself)
The vertical axis of your graph is the output intensity. The horizontal axis is the input.
If we interpret your graph like that we see that a few low input values will be spread (stretched) across a wider range of output values.
The following input values are compressed into a smaller output interval, while the rest of the input values stay unchanged.
Humans have trouble distinguishing between very similar intensity values.
The following black rectangles both show a square on a square.
In the left one the intensities are 5 on 1, and in the right one 50 on 10.
So by stretching the interval 1:5 by a factor of 10 I made the inner square visible, because I increased the contrast.
So if you have information in a small region of gray values you can stretch it to make it more visible to humans. Computers usually don't care.
Shrinking has the opposite effect. You decrease contrast. This makes sense for an intensity range that bears no information. Why waste that interval?
As we only have a limited number of gray values, we can of course only stretch one section if we sacrifice another one (by shrinking it).
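Such a transform is easy to express as a piecewise-linear lookup. Here is a small numpy sketch with made-up breakpoints that follow the curve described above (low inputs stretched, the next interval compressed, the rest unchanged):

import numpy as np

def stretch_low_range(gray):
    # 8-bit grayscale in, 8-bit grayscale out. Inputs 0..50 are spread over
    # outputs 0..120, inputs 50..150 are squeezed into 120..150, and inputs
    # above 150 are left alone. Pick the breakpoints around the intensity
    # range you want to make visible.
    in_pts = [0, 50, 150, 255]
    out_pts = [0, 120, 150, 255]
    return np.interp(gray.astype(np.float64), in_pts, out_pts).round().astype(np.uint8)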

Merging two labels in connected components during the first pass

In connected components labeling, if I see that the pixel to the left and the pixel above the current pixel have the same color but different labels, can't I automatically reassign their labels to be the same (instead of doing with an equivalence table)?
Wikipedia and MathWorks assign the minimum label to the current pixel but otherwise leave the neighboring pixels as they are. Then they clean up the label table with another pass. Unless I'm mistaken, my tweak will allow me to label the image uniformly in a single pass. Is there an example in which my little tweak will break the algorithm?
You wouldn't eliminate the second pass. If you did change the labels of the neighboring pixels, what about their neighboring pixels? Basically, when this happens you've discovered that the two labels are in the same equivalence class, but you'd still have to walk over everything you've examined so far to reassign those labels. You may as well just do that on the second pass and do all the reassigning in one sweep.
Example:
+-+-+-+
|?|?|A|
+-+-+-+
|B|B|x|
+-+-+-+
You examine pixel x, it matches both pixels north and west. Suppose A is the minimum label. So you choose to label the three pixels A, but that won't relabel the other B pixel. You still have to record that A==B, and will still have to sweep through to relabel any B's that remain. Furthermore, you might later find that A itself is equivalent to some other smaller label, and you'd have to relabel all these pixels later.
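For reference, here is a compact Python sketch of the usual two-pass approach with a union-find equivalence table, which shows why recording A == B and resolving in the second pass is sufficient (4-connectivity; binary is a 2-D array of 0/1):

import numpy as np

def two_pass_label(binary):
    labels = np.zeros(binary.shape, dtype=int)
    parent = [0]                                 # union-find parents; label 0 is background

    def find(l):
        while parent[l] != l:
            l = parent[l]
        return l

    next_label = 1
    h, w = binary.shape
    for y in range(h):
        for x in range(w):
            if not binary[y, x]:
                continue
            north = labels[y - 1, x] if y else 0
            west = labels[y, x - 1] if x else 0
            neighbors = [l for l in (north, west) if l]
            if not neighbors:
                parent.append(next_label)        # start a new provisional label
                labels[y, x] = next_label
                next_label += 1
            else:
                m = min(neighbors)
                labels[y, x] = m
                for l in neighbors:              # record the equivalence, don't relabel yet
                    parent[find(l)] = find(m)

    # Second pass: replace every provisional label by its class representative.
    for y in range(h):
        for x in range(w):
            if labels[y, x]:
                labels[y, x] = find(labels[y, x])
    return labels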

Algorithm for determining the prominent colour of a photograph

When we look at a photo of a group of trees, we are able to identify that the photo is predominantly green and brown, or for a picture of the sea we are able to identify that it is mostly blue.
Does anyone know of an algorithm that can be used to detect the prominent color or colours in a photo?
I can envisage a 3D clustering algorithm in RGB space or something similar. I was wondering if someone knows of an existing technique.
Convert the image from RGB to a color space with brightness and saturation separated (HSL/HSV)
http://en.wikipedia.org/wiki/HSL_and_HSV
Then find the dominant hue values. Make a histogram of the hue values of all pixels and analyze which angle regions the peaks fall in. A large peak in the quadrant between 180 and 270 degrees means there is a large portion of blue in the image, for example.
There can be several difficulties in determining one dominant color. Pathological example: an image whose left half is blue and right half is red. Also, the hue will not deal very well with grayscales obviously. So a chessboard image with 50% white and 50% black will suffer from two problems: the hue is arbitrary for a black/white image, and there are two colors that are exactly 50% of the image.
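A minimal sketch of that idea in Python with Pillow and numpy (the saturation and value cutoffs are an assumption, added to sidestep the grayscale problem just mentioned):

import numpy as np
from PIL import Image

def dominant_hue(path, bins=36, min_saturation=0.2, min_value=0.2):
    # Histogram the hue of reasonably saturated, reasonably bright pixels
    # and return the center of the strongest bin in degrees.
    hsv = np.asarray(Image.open(path).convert("HSV"), dtype=np.float64) / 255.0
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    mask = (s >= min_saturation) & (v >= min_value)
    hist, edges = np.histogram(h[mask] * 360.0, bins=bins, range=(0.0, 360.0))
    peak = hist.argmax()
    return (edges[peak] + edges[peak + 1]) / 2.0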
It sounds like you want to start by computing an image histogram or color histogram of the image. The predominant color(s) will be related to the peak(s) in the histogram.
You might want to convert the image from RGB to indexed; then you could use a regular histogram and detect the peaks (Matlab does this with rgb2ind(), as you probably already know), and then the problem would be reduced to your regular "finding peaks in an array".
Then
n = hist(Y,nbins) bins the elements in vector Y into nbins equally spaced containers and returns the number of elements in each container as a row vector.
The values in n tell you how many elements fall into each bin. Then it's just a matter of fiddling with the number of bins to make them wide enough, deciding how many elements in a bin make you count it as a predominant color, taking the bins that contain at least that many elements, calculating the index that corresponds to their middle, and converting it back to RGB.
Whatever you're using for your processing probably has similar functions to those.
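For instance, a rough Python/Pillow equivalent of the rgb2ind-plus-histogram idea (the palette size and the number of returned colors are arbitrary):

from PIL import Image
import numpy as np

def dominant_palette_colors(path, n_colors=16, top=3):
    # Quantize to a small palette, count how often each palette index is
    # used, and map the most frequent indices back to RGB triples.
    img = Image.open(path).convert("RGB").quantize(colors=n_colors)
    counts = np.bincount(np.asarray(img).ravel(), minlength=n_colors)
    palette = np.array(img.getpalette()[:n_colors * 3]).reshape(-1, 3)
    order = counts.argsort()[::-1][:top]
    return [tuple(palette[i]) for i in order]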
1. Average all pixels in the image.
2. Remove all pixels that are farther away from the average color than the standard deviation.
3. GOTO 1 with the remaining pixels until arbitrarily few are left (1, or maybe 1%).
You might also want to pre-process the image, for example apply a high-pass filter (removing only very low frequencies) to even out the lighting in the photo (see http://en.wikipedia.org/wiki/Checker_shadow_illusion).
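A sketch of that loop in numpy (interpreting "the standard deviation" as the standard deviation of the distances to the current average, which is one of several reasonable readings):

import numpy as np

def iterative_mean_color(rgb, min_fraction=0.01):
    # rgb is an (H, W, 3) or (N, 3) array; returns the color the loop converges to.
    pixels = rgb.reshape(-1, 3).astype(np.float64)
    min_pixels = max(1, min_fraction * pixels.shape[0])
    while len(pixels) > min_pixels:
        mean = pixels.mean(axis=0)
        dist = np.linalg.norm(pixels - mean, axis=1)
        keep = dist <= dist.std()                # drop pixels farther away than the std
        if keep.all() or not keep.any():         # nothing would change, so stop
            break
        pixels = pixels[keep]
    return pixels.mean(axis=0)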

Computer vision for a length ratio

Let's say I take a picture of two hammers side by side (they may be aligned differently, but there is always one on the right and one on the left), where each might look like this, and I want to calculate the ratio of the lengths of the handles of the hammers.
For example, the output from an input image would be the length of the red part of the one on the left (its handle) divided by the length of the handle of the one on the right.
How would I go about doing this?
If you know the handle color it doesn't sound hard. Just select those pixels and take the longer side of a minimum oriented bounding box.
Here are a couple of hints:
Make sure that the bounding boxes of the hammers don't overlap. If you can guarantee this, try this approach:
Scale the image to width = 10%, height = 10px. Find the largest run of background-colored pixels near the middle of the image; that allows you to separate the two hammers into individual images. Multiply the positions by 10 to transform them back into coordinates of the original image.
Create two images (one for each hammer)
Crop the border
Scale the image to width = 10px, height = 10%. Count all reddish pixels (save the image and examine the pixel values of the red and non-red parts to get an idea of what to look for).
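If the color selection works, measuring the handles doesn't need much machinery. Here is a rough numpy sketch of the select-reddish-pixels-and-measure idea; the color thresholds are invented, and it simply splits the image at the middle column instead of searching for the background gap as suggested above:

import numpy as np

def handle_length(mask):
    # Longer side of a rough oriented bounding box around the True pixels,
    # using the principal axes of the pixel coordinates.
    ys, xs = np.nonzero(mask)
    pts = np.column_stack([xs, ys]).astype(np.float64)
    pts -= pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts, full_matrices=False)
    proj = pts @ vt.T
    return (proj.max(axis=0) - proj.min(axis=0)).max()

def handle_ratio(rgb):
    # rgb is an (H, W, 3) uint8 image; returns left handle length / right handle length.
    r, g, b = (rgb[..., i].astype(int) for i in range(3))
    reddish = (r > 120) & (r - g > 40) & (r - b > 40)
    mid = rgb.shape[1] // 2
    return handle_length(reddish[:, :mid]) / handle_length(reddish[:, mid:])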
