Which sigmoid function to use for increasing contrast of an image? - image-processing

Is there a standard sigmoidal function used to increase the contrast in a gray level bitmap?
Currently I am using the following. It is applied to gray levels represented as values between 0 and 1 inclusive.
static double ContrastCurve(double val, double k = 1)
{
    Func<double, double> logistic_func = (double x) => 1.0 / (1.0 + Math.Exp(-k * (x - 0.5)));
    var low = logistic_func(0);
    var high = logistic_func(1);
    var range = high - low;
    var value = logistic_func(val);
    return (value - low) / range;
}
This is the logistic function applied to a value between 0 and 1, with the output normalized so that it also lies in [0...1]. It works, but it is completely arbitrary, something I just made up, so the k parameter has no official name or meaning in the image processing literature.
If there is a standard function I would prefer that, but I haven't found anything that seems definitive. Code such as this link seems just as ad hoc to me.

As Mark Setchell's comment notes, ImageMagick uses the following function citing "Fundamentals of Image Processing", Hany Farid:
g(u) = 1 / [1 + exp(-α*u + β)]
scaled such that for domain [0..1] its range is [0..1].
This is essentially a two-parameter version of the function defined in the code in the question above. That is, the code in the question implements the same function with the substitution α = k and β = k/2, which yields a one-parameter function f with f(0.5) = 0.5 when scaled so that f(0) = 0 and f(1) = 1.
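For reference, a minimal sketch of that two-parameter form in Python (the α and β defaults here are arbitrary, chosen only so the curve is visibly steeper than the identity):
import math

def sigmoidal_contrast(u, alpha=10.0, beta=5.0):
    # g(u) = 1 / (1 + exp(-alpha*u + beta)), rescaled so that the
    # domain [0, 1] maps onto the range [0, 1].
    g = lambda x: 1.0 / (1.0 + math.exp(-alpha * x + beta))
    low, high = g(0.0), g(1.0)
    return (g(u) - low) / (high - low)
With beta = alpha / 2 this reduces to the one-parameter curve in the question: the fixed point sits at 0.5, i.e. sigmoidal_contrast(0.5, k, k / 2) ≈ 0.5 for any k.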

Related

Training NN with Julia's Flux - Loss function with derivative of output and functions of output

I want to run this NN in which the input is time over some interval. There are no labels, and the loss function requires the derivative of the outputs and a specified function (H in my code), which is itself a function of the outputs. I believe my loss function is not properly set up yet. I would also like to watch how the loss decreases, to see how close I am getting to the actual function, but I can't find a way to monitor its progress.
Here is my new code:
using Flux, Zygote, ForwardDiff
##Data
t=vcat(0:0.1:4)
##Problem parameters
α = 2; C = 1; β = 0.5; P = 1; π₀ = 0.5
#Initial and final conditions
x₀ = 0.5
p₄ = 1
t₀ = 0
t𝔣 = 4
#Hidden layer length
len_hidden=5
X = Chain(Dense(1,len_hidden),Dense(len_hidden,1,relu))
x(t) = (t - t₀)*X([t])[1] + x₀
dxdt(t) = ForwardDiff.derivative(x,t)
Ρ = Chain(Dense(1,len_hidden),Dense(len_hidden,1,relu))
p(t) = p₄ + (t - t𝔣)*Ρ([t])[1]
dpdt(t) = ForwardDiff.derivative(p,t)
U = Chain(Dense(1,len_hidden),Dense(len_hidden,1,relu))
u(t) = U([t])[1]
Θ = Flux.params(X,Ρ,U)
H(x,p,u) = α*u*x - C*u^2 + p*β*x*(1 - x)*(P*u - π₀)
#Partials
dHdx(t) = α*u(t) + p(t)*β*(1 - 2*x(t))*(P*u(t) - π₀)
dHdp(t) = (1 - x(t))*x(t)*β*(P*u(t) - π₀)
dHdu(t) = α*x(t) - 2*C*u(t) + P*p(t)*β*x(t)*(1 - x(t))
#Loss function
function loss(t)
    return (-dxdt(t) + dHdp(t))^2 + (dpdt(t) + dHdx(t))^2 + (dHdu(t))^2
end
opt=Descent()
parameters=Θ
data=t
Flux.train!(loss, parameters, data, opt, cb = () -> println("Training"))
Is the way I wrote the loss function correct? For each time instant (my data vector), am I computing the loss with the updated value of each function inside loss()? So far, x(0) matches the imposed initial condition, but it stays constant for all other time instants, which makes me think the loss is not being evaluated and minimized over time taking into account the evolution of the other functions.

Understanding code wrt Logistic Regression using gradient descent

I was following Siraj Raval's videos on logistic regression using gradient descent:
1) Link to the longer video:
https://www.youtube.com/watch?v=XdM6ER7zTLk&t=2686s
2) Link to the shorter video:
https://www.youtube.com/watch?v=xRJCOz3AfYY&list=PL2-dafEMk2A7mu0bSksCGMJEmeddU_H4D
In the videos he talks about using gradient descent to reduce the error over a set number of iterations so that the function converges (the slope becomes zero).
He also illustrates the process with code. The following are the two main functions from it:
def step_gradient(b_current, m_current, points, learningRate):
    b_gradient = 0
    m_gradient = 0
    N = float(len(points))
    for i in range(0, len(points)):
        x = points[i, 0]
        y = points[i, 1]
        b_gradient += -(2/N) * (y - ((m_current * x) + b_current))
        m_gradient += -(2/N) * x * (y - ((m_current * x) + b_current))
    new_b = b_current - (learningRate * b_gradient)
    new_m = m_current - (learningRate * m_gradient)
    return [new_b, new_m]

def gradient_descent_runner(points, starting_b, starting_m, learning_rate, num_iterations):
    b = starting_b
    m = starting_m
    for i in range(num_iterations):
        b, m = step_gradient(b, m, array(points), learning_rate)
    return [b, m]
#The above functions are called below:
learning_rate = 0.0001
initial_b = 0 # initial y-intercept guess
initial_m = 0 # initial slope guess
num_iterations = 1000
[b, m] = gradient_descent_runner(points, initial_b, initial_m, learning_rate, num_iterations)
# code taken from Siraj Raval's github page
Why do the values of b and m continue to update for all the iterations? After a certain number of iterations the function will converge, i.e. we find the values of b and m that give slope = 0.
So why do we continue iterating after that point and keep updating b and m?
Aren't we losing the 'correct' b and m values that way? How does the learning rate help convergence if we keep updating values after converging? And why is there no check for convergence, so how does this actually work?
In practice you will most likely not reach a slope of exactly 0. Think of your loss function as a bowl. If your learning rate is too high, it is possible to overshoot the lowest point of the bowl. Conversely, if the learning rate is too low, learning becomes too slow and won't reach the lowest point of the bowl before all iterations are done.
That's why in machine learning the learning rate is an important hyperparameter to tune.
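To make the bowl analogy concrete, here is a tiny made-up example (plain gradient descent on f(w) = w², not the code from the video; the learning rates are arbitrary):
def descend(lr, steps=5, w=5.0):
    # Gradient descent on f(w) = w**2, whose gradient is 2*w.
    for _ in range(steps):
        w = w - lr * 2 * w
    return w

print(descend(lr=0.1))  # ~1.64: w shrinks toward the minimum at 0
print(descend(lr=1.1))  # ~-12.4: too large a step overshoots and |w| grows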
Actually, once we reach a slope of 0, b_gradient and m_gradient become 0, and thus in:
new_b = b_current - (learningRate * b_gradient)
new_m = m_current - (learningRate * m_gradient)
new_b and new_m simply keep the old (correct) values, as nothing is subtracted from them.
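If you do want an explicit stopping criterion, one possible sketch of how gradient_descent_runner could bail out early (reusing step_gradient and array from the code above; the tolerance value is arbitrary):
def gradient_descent_runner(points, starting_b, starting_m, learning_rate,
                            num_iterations, tolerance=1e-9):
    b, m = starting_b, starting_m
    for i in range(num_iterations):
        new_b, new_m = step_gradient(b, m, array(points), learning_rate)
        # Stop once the updates become negligibly small (gradient ~ 0).
        if abs(new_b - b) < tolerance and abs(new_m - m) < tolerance:
            return [new_b, new_m]
        b, m = new_b, new_m
    return [b, m]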

Algorithm to always sum sliders to 100% failing due to zeroes

This is (supposed to be) a function which makes sure that the sum of a number of sliders' values always adds up to globalTotal.
A slider's value can be changed manually by the user to changer.value, and applying this function to the values of the other sliders determines their new value, endVal.
It takes the startVal of the slider which needs changing and the original value of the slider that was changed, changerStartVal, and determines the new value of the others by weighting.
The problem, and my question, is this: sometimes remainingStartVals can be zero (when the changed slider is moved all the way to its maximum), or startVal can be zero (when the changed slider is moved to zero and then another slider is moved). When this happens I get a divide-by-zero or a multiply-by-zero respectively. Both are bad and lead to incorrect results. Is there an easy way to fix this?
func calcNewVal(startVal: Float, changerStartVal: Float) -> Float {
    let remainingStartVals = globalTotal - changerStartVal
    let remainingNewVals = globalTotal - changer.value
    let endVal = ((startVal * (100 / remainingStartVals)) / 100) * remainingNewVals
    return endVal
}
This is a mathematical problem, not a problem related to Swift or any specific programming language so I'll answer with mathematical formulas and explanations rather than code snippets.
I don't really understand your algorithm either. For example in this line:
let endVal = ((startVal * (100 / remainingStartVals)) / 100) * remainingNewVals
you first multiply by 100 and then divide by 100 again, so the two factors of 100 cancel and you could leave them out entirely!
However, I think I understand what you're trying to achieve and the problem is that there is no generic solution. Before writing an algorithm you have to define exactly how you want it to behave, including all edge cases.
Let's define:
vi as the value of the i-th slider and
Δi as the change of the i-th slider's value
Then you have to think of the following cases:
Case 1:
0 < vi ≤ 1 for all sliders (other than the one you changed)
This is probably the common case you were thinking about. In this case you want to adjust the values of your unchanged sliders so that their total change is equal to the change Δchanged of the slider you changed. In other words:
∑i Δi = 0
If you have 3 sliders this reduces to:
Δ1 + Δ2 + Δ3 = 0
And if the slider that changed is the one with i = 1 then this requirement would read:
Δ1 = – (Δ2 + Δ3)
You want the sliders to adjust proportionally which means that this change Δ1 should not be distributed equally on the other sliders but depending on their current value:
Δ2 = – w2 * Δ1
Δ3 = – w3 * Δ1
The normed weight factors are
w2 = v2 / (v2 + v3)
w3 = v3 / (v2 + v3)
Thus we get:
Δ2 = – v2 / (v2 + v3) * Δ1
Δ3 = – v3 / (v2 + v3) * Δ1
So these are the formulas to apply for this particular case.
However, there are quite a few other cases that don't work with this approach:
Case 2:
vi = 0 for at least one, but not all of the sliders (other than the one you changed)
In this case the approach from case 1 still works (and it is the logical thing to do). A slider whose value is zero will simply never change; all of the change is distributed over the sliders with a value > 0.
Case 3:
vi = 0 for all sliders (other than the one you changed)
In this case the proportional change doesn't work because there is simply no information how to distribute the change over the sliders. They're all zero! This is actually your zero division problem: In the case where we have 3 sliders and the slider 1 changes we'll get
v2 + v3 = 0
This is only another manifestation of the fact that the weight factors wi are simply undefined. Thus, you'll have to manually define what will happen in this case.
The most plausible thing to do in this case is to distribute the change evenly over all sliders:
Δi = – (1 / n) * Δ1
where n is the number of sliders (excluding the one that was changed!). With this logic, every slider gets "the same share" of the change.
Now that we're clear about the algorithm, you can implement these cases in code. Here is some pseudo code as an example:
if sum(valuesOfAllSlidersOtherThanTheSliderThatChanged) == 0 {
    for allUnchangedSliders {
        // distribute change evenly over the sliders
        Δi = – (1 / n) * Δ_changedSlider
    }
}
else {
    for allUnchangedSliders {
        // use weight factor to change proportionally
        Δi = – v_i / ∑(v_i) * Δ_changedSlider
    }
}
Please be aware that you must cache the values of the current state of your sliders at the beginning, or (even better) first compute all the changes and then apply them in a batch. Otherwise you will use a value v2' that you just computed to determine v3', which will obviously result in incorrect values.
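If it helps, here is the same logic as a small runnable sketch (written in Python purely to keep the formulas visible; the function name and example values are made up):
def redistribute(values, changed_index, delta_changed):
    # Return new slider values after slider `changed_index` moved by
    # `delta_changed`, keeping the total constant.
    others = [i for i in range(len(values)) if i != changed_index]
    total_others = sum(values[i] for i in others)

    new_values = list(values)
    new_values[changed_index] += delta_changed
    for i in others:
        if total_others == 0:
            # Case 3: all other sliders are zero -> spread the change evenly.
            weight = 1.0 / len(others)
        else:
            # Cases 1 and 2: change proportionally to the current value.
            weight = values[i] / total_others
        new_values[i] -= weight * delta_changed
    return new_values

# Example: three sliders summing to 100, slider 0 moves up by 20.
print(redistribute([50.0, 30.0, 20.0], 0, 20.0))  # [70.0, 18.0, 12.0]
Note that all weights are computed from the cached starting values, so the changes are effectively applied as a batch.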
Hey @Sean, the simplest adjustment I could think of here is to check that remainingStartVals is not 0 (meaning there are weights assigned to the other sliders) and also that the slider had a weight to begin with, i.e. its startVal is not 0:
func calcNewVal(startVal: Float, changerStartVal: Float) -> Float {
    var endVal: Float = 0
    let remainingStartVals = globalTotal - changerStartVal
    if remainingStartVals != 0 && startVal != 0 {
        let remainingNewVals = globalTotal - changer.value
        endVal = ((startVal * (100 / remainingStartVals)) / 100) * remainingNewVals
    }
    return endVal
}

Is there an easy way to implement a Optimizer.Maximize() function in TensorFlow

There are several experiments that rely on gradient ascent rather than gradient descent. I have looked into some approaches to using "cost" and the minimize function to simulate "maximize", but I am still not certain I know how to properly implement a maximize() function. Also, most of these cases are closer to unsupervised learning. So given this code concept for a cost function:
cost = (Yexpected - Ycalculated)^2
train_step = tf.train.AdamOptimizer(0.5).minimize(cost)
I would like to write something where I am following the positive gradient and there may not be a Yexpected value:
maxMe = Function(Ycalculated)
train_step = tf.train.AdamOptimizer(0.5).maximize(maxMe)
A good example of this need is "http://cs229.stanford.edu/proj2009/LvDuZhai.pdf" with Recurrent Reinforcement Learning.
I have read a few papers and references stating that changing the sign will flip the direction of movement toward an increasing gradient, but given TensorFlow's internal calculation of the gradient, I am not sure whether this will work to maximize, as I don't know of a way to validate the results:
maxMe = Function(Ycalculated)
train_step = tf.train.AdamOptimizer(0.5).minimize( -1 * maxMe )
The intuition is simple: the minimize() function keeps pushing the given value down. For example, if you start at 5, then with each iteration (depending on the learning rate) the value becomes, say, 4, then 3, then 2, 1, 0 and so on, as far down as it can go. Now if you pass -5 at the beginning (which is really +5 with the sign flipped explicitly), the optimizer will keep trying to push that number down further: -5, -6, -7, -8, and so on. But since we flipped the sign, the actual (positive) function is increasing. In other words, in the latter case the gradient updates change the parameters of the neural network in a way that maximizes the function, not minimizes it.
Toy example with arbitrary numbers:
The input x = 1.5, The weight parameter at time (t) w_t = 0.1,
The observed response y = 3.0, The learning rate lr = 0.1.
x * w = 0.15 (this is y predicted for the current w)
loss function = (3.0 - 0.15)^2 = 8.1
Applying gradient descent:
w_(t+1) = w_t - lr * (derivative of loss function with respect to w)
w_(t+1) = 0.1 - (0.1 * [1.5 * 2(0.15 - 3.0)]) = 0.1 - (-0.855) = 0.955
If we use the new w_(t+1) we will have:
1.5 * 0.955 = 1.43 (which is closer to the correct answer 3.0)
and the new loss is: (3.0 - 1.43)^2 = 2.46 (a smaller error).
If we keep iterating, we will adjust w to a value that gives us the minimum cost possible.
Now let's repeat the same experiment but with the sign flipped to negative:
loss function = - (3.0 - 0.15)^2 = -8.1
Applying gradient descent:
w_(t+1) = w_t - lr * (derivative of loss function with respect to w)
w_(t+1) = 0.1 - (0.1 * [1.5 * -2(0.15 - 3.0)]) = 0.1 - 0.855 = −0.755
If we apply the new w_(t+1) we will have:
1.5 * −0.755 = −1.1325 and the new loss is: (3.0 - (-1.1325))^2 = 17.07
(the loss function is maximizing!).
That is also applicable to any differentiable function, but this is just a simple naive example to demonstrate the idea.
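For what it's worth, the same toy numbers can be checked in a few lines of plain Python (no TensorFlow involved; this is just the arithmetic above):
x, y, w, lr = 1.5, 3.0, 0.1, 0.1

# Minimizing the ordinary squared loss (y - x*w)**2:
grad = 2 * (x * w - y) * x   # = -8.55
print(w - lr * grad)         # 0.955 -> the prediction moves toward y

# Minimizing the negated loss -(y - x*w)**2 instead:
grad_neg = -grad             # = 8.55
print(w - lr * grad_neg)     # -0.755 -> the (positive) loss grows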
So, you can do, as you suggested already:
optimizer.minimize( -1 * value)
Or if you like, create a wrapper function (which in fact is needless, but just to mention it):
def maximize(optimizer, value, **kwargs):
    return optimizer.minimize(-value, **kwargs)
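Usage would then mirror the snippet from the question (assuming maxMe is the tensor you want to maximize):
train_step = maximize(tf.train.AdamOptimizer(0.5), maxMe)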

Generate random numbers with a given distribution

Check out this question:
Swift probability of random number being selected?
The top answer suggests using a switch statement, which does the job. However, if I have a very large number of cases to consider, the code looks very inelegant: I end up with a giant switch statement with very similar code repeated in each case.
Is there a nicer, cleaner way to pick a random number with a certain probability when you have a large number of probabilities to consider? (like ~30)
This is a Swift implementation strongly influenced by the various
answers to Generate random numbers with a given (numerical) distribution.
For Swift 4.2/Xcode 10 and later (explanations inline):
func randomNumber(probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, +)
    // Random number in the range 0.0 <= rnd < sum :
    let rnd = Double.random(in: 0.0 ..< sum)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerated() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return (probabilities.count - 1)
}
Examples:
let x = randomNumber(probabilities: [0.2, 0.3, 0.5])
returns 0 with probability 0.2, 1 with probability 0.3,
and 2 with probability 0.5.
let x = randomNumber(probabilities: [1.0, 2.0])
returns 0 with probability 1/3 and 1 with probability 2/3.
For Swift 3/Xcode 8:
func randomNumber(probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, +)
    // Random number in the range 0.0 <= rnd < sum :
    let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerated() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return (probabilities.count - 1)
}
For Swift 2/Xcode 7:
func randomNumber(probabilities probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, combine: +)
    // Random number in the range 0.0 <= rnd < sum :
    let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerate() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return (probabilities.count - 1)
}
Is there a nicer, cleaner way to pick a random number with a certain probability when you have a large number of probabilities to consider?
Sure. Write a function that generates a number based on a table of probabilities. That's essentially what the switch statement you've pointed to is: a table defined in code. You could do the same thing with data using a table that's defined as a list of probabilities and outcomes:
probability outcome
----------- -------
0.4 1
0.2 2
0.1 3
0.15 4
0.15 5
Now you can pick a number between 0 and 1 at random. Starting from the top of the list, add up probabilities until you've exceeded the number you picked, and use the corresponding outcome. For example, let's say the number you pick is 0.6527637. Start at the top: 0.4 is smaller, so keep going. 0.6 (0.4 + 0.2) is smaller, so keep going. 0.7 (0.6 + 0.1) is larger, so stop. The outcome is 3.
I've kept the table short here for the sake of clarity, but you can make it as long as you like, and you can define it in a data file so that you don't have to recompile when the list changes.
Note that there's nothing particularly specific to Swift about this method -- you could do the same thing in C or Lisp or any other language.
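As a rough sketch of that table-driven approach (shown here in Python only to illustrate the idea; the table values are the ones from above and are assumed to sum to 1):
import random

table = [(0.4, 1), (0.2, 2), (0.1, 3), (0.15, 4), (0.15, 5)]  # (probability, outcome)

def pick(table):
    r = random.random()  # uniform in [0, 1)
    cumulative = 0.0
    for probability, outcome in table:
        cumulative += probability
        if r < cumulative:
            return outcome
    return table[-1][1]  # guard against floating point round-off

print(pick(table))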
This seems like a good opportunity for a shameless plug for my small library, swiftstats:
https://github.com/r0fls/swiftstats
For example, this generates a random value from a normal distribution with mean 0 and variance 1:
import SwiftStats
let n = SwiftStats.Distributions.Normal(0, 1.0)
print(n.random())
Supported distributions include: normal, exponential, binomial, etc...
It also supports fitting sample data to a given distribution, using the Maximum Likelihood Estimator for the distribution.
See the project readme for more info.
You could do it with exponential or quadratic functions: have x be your random number and take y as the new random number. Then you just have to jiggle the equation until it fits your use case. Say I had (x^2)/10 + (x/300). Put your random number in (as some floating-point form), and take the floor with Int() when it comes out. So, if my random number generator goes from 0 to 9, I have a 40% chance of getting 0, a 30% chance of getting 1 - 3, a 20% chance of getting 4 - 6, and a 10% chance of an 8. You're basically trying to fake some kind of distribution curve.
Here's an idea of what it would look like in Swift:
func giveY(x: UInt32) -> Int {
    let xD = Double(x)
    return Int(xD * xD / 10 + xD / 300)
}

let ans = giveY(arc4random_uniform(10))
EDIT:
I wasn't very clear above - what I meant was you could replace the switch statement with some function that would return a set of numbers with a probability distribution that you could figure out with regression using wolfram or something. So, for the question you linked to, you could do something like this:
import Foundation
func returnLevelChange() -> Double {
    return 0.06 * exp(0.4 * Double(arc4random_uniform(10))) - 0.1
}
newItemLevel = oldItemLevel * returnLevelChange()
So that function returns a double somewhere between -0.04 and 2.1. That would be your "x% worse/better than current item level" figure. But, since it's an exponential function, it won't return an even spread of numbers. The arc4random_uniform(10) returns an int from 0 - 9, and each of those results in a double like this:
0: -0.04
1: -0.01
2: 0.03
3: 0.1
4: 0.2
5: 0.34
6: 0.56
7: 0.89
8: 1.37
9: 2.1
Since each of those ints from the arc4random_uniform has an equal chance of showing up, you get probabilities like this:
40% chance of -0.04 to 0.1 (~ -5% - 10%)
30% chance of 0.2 to 0.56 (~ 20% - 55%)
20% chance of 0.89 to 1.37 (~ 90% - 140%)
10% chance of 2.1 (~ 200%)
Which is similar to the probabilities that the other person had. Now, for your function it's much more difficult, and the other answers are almost definitely more applicable and elegant. BUT you could still do it.
Arrange each of the letters in order of their probability, from largest to smallest. Then take their cumulative sums, starting with 0 and leaving off the last one (so probabilities of 50%, 30%, 20% become 0, 0.5, 0.8). Then multiply them up until they're integers of reasonable accuracy (0, 5, 8). Then plot them: the cumulative probabilities are your x's, and the things you want to select with a given probability (your letters) are your y's (you obviously can't plot actual letters on the y axis, so you'd just plot their indices in some array). Then you'd try to find some regression there, and have that be your function. For instance, trying those numbers, I got
e^0.14x - 1
and this:
let letters: [Character] = ["a", "b", "c"]

func randLetter() -> Character {
    return letters[Int(exp(0.14 * Double(arc4random_uniform(10))) - 1)]
}
returns "a" 50% of the time, "b" 30% of the time, and "c" 20% of the time. Obviously pretty cumbersome for more letters, and it would take a while to figure out the right regression, and if you wanted to change the weightings you're have to do it manually. BUT if you did find a nice equation that did fit your values, the actual function would only be a couple lines long, and fast.
