Generate random numbers with a given distribution - ios

Check out this question:
Swift probability of random number being selected?
The top answer suggests using a switch statement, which does the job. However, if I have a very large number of cases to consider, the code looks very inelegant; I have a giant switch statement with very similar code repeated over and over in each case.
Is there a nicer, cleaner way to pick a random number with a certain probability when you have a large number of probabilities to consider? (like ~30)

This is a Swift implementation strongly influenced by the various
answers to Generate random numbers with a given (numerical) distribution.
For Swift 4.2/Xcode 10 and later (explanations inline):
func randomNumber(probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, +)
    // Random number in the range 0.0 <= rnd < sum :
    let rnd = Double.random(in: 0.0 ..< sum)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerated() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return (probabilities.count - 1)
}
Examples:
let x = randomNumber(probabilities: [0.2, 0.3, 0.5])
returns 0 with probability 0.2, 1 with probability 0.3,
and 2 with probability 0.5.
let x = randomNumber(probabilities: [1.0, 2.0])
returns 0 with probability 1/3 and 1 with probability 2/3.
For Swift 3/Xcode 8:
func randomNumber(probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, +)
    // Random number in the range 0.0 <= rnd < sum :
    let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerated() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return (probabilities.count - 1)
}
For Swift 2/Xcode 7:
func randomNumber(probabilities probabilities: [Double]) -> Int {
    // Sum of all probabilities (so that we don't have to require that the sum is 1.0):
    let sum = probabilities.reduce(0, combine: +)
    // Random number in the range 0.0 <= rnd < sum :
    let rnd = sum * Double(arc4random_uniform(UInt32.max)) / Double(UInt32.max)
    // Find the first interval of accumulated probabilities into which `rnd` falls:
    var accum = 0.0
    for (i, p) in probabilities.enumerate() {
        accum += p
        if rnd < accum {
            return i
        }
    }
    // This point might be reached due to floating point inaccuracies:
    return (probabilities.count - 1)
}
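Aside from the versions above: if you draw many samples from the same distribution, one possible refinement (a sketch of mine, not part of the original answer) is to precompute the accumulated sums once and binary-search them on each draw, instead of re-scanning the whole array. DiscreteDistribution and sample() are hypothetical names:
struct DiscreteDistribution {
    // Running sums of the weights; the last entry is the total.
    private let cumulative: [Double]

    init(probabilities: [Double]) {
        var running = 0.0
        cumulative = probabilities.map { p -> Double in
            running += p
            return running
        }
    }

    func sample() -> Int {
        let rnd = Double.random(in: 0.0 ..< cumulative[cumulative.count - 1])
        // Binary search for the first accumulated sum greater than `rnd`:
        var low = 0
        var high = cumulative.count - 1
        while low < high {
            let mid = (low + high) / 2
            if rnd < cumulative[mid] {
                high = mid
            } else {
                low = mid + 1
            }
        }
        return low
    }
}

let d = DiscreteDistribution(probabilities: [0.2, 0.3, 0.5])
let samples = (0 ..< 5).map { _ in d.sample() }
This turns each draw from O(n) into O(log n), which only matters when the probability array is large and sampled often.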

Is there a nicer, cleaner way to pick a random number with a certain probability when you have a large number of probabilities to consider?
Sure. Write a function that generates a number based on a table of probabilities. That's essentially what the switch statement you've pointed to is: a table defined in code. You could do the same thing with data using a table that's defined as a list of probabilities and outcomes:
probability    outcome
-----------    -------
0.4            1
0.2            2
0.1            3
0.15           4
0.15           5
Now you can pick a number between 0 and 1 at random. Starting from the top of the list, add up probabilities until the running total exceeds the number you picked, then use the corresponding outcome. For example, say the number you pick is 0.6527637. Start at the top: 0.4 is smaller, so keep going. 0.6 (0.4 + 0.2) is smaller, so keep going. 0.7 (0.6 + 0.1) is larger, so stop. The outcome is 3.
I've kept the table short here for the sake of clarity, but you can make it as long as you like, and you can define it in a data file so that you don't have to recompile when the list changes.
Note that there's nothing particularly specific to Swift about this method -- you could do the same thing in C or Swift or Lisp.
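To make that concrete, here is a minimal Swift sketch of the table-driven approach (WeightedTable is a hypothetical helper, not a library type), using the same weights as the table above:
struct WeightedTable<Outcome> {
    private let entries: [(weight: Double, outcome: Outcome)]
    private let total: Double

    init(_ entries: [(weight: Double, outcome: Outcome)]) {
        self.entries = entries
        self.total = entries.reduce(0) { $0 + $1.weight }
    }

    func pick() -> Outcome {
        // Walk the table, subtracting weights until the random value is used up:
        var rnd = Double.random(in: 0 ..< total)
        for entry in entries {
            rnd -= entry.weight
            if rnd < 0 {
                return entry.outcome
            }
        }
        // Fallback for floating point inaccuracies:
        return entries[entries.count - 1].outcome
    }
}

let table = WeightedTable([(0.4, 1), (0.2, 2), (0.1, 3), (0.15, 4), (0.15, 5)])
let outcome = table.pick()
Because the table is just data, it could equally well be loaded from a file, as suggested above.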

This seems like a good opportunity for a shameless plug for my small library, swiftstats:
https://github.com/r0fls/swiftstats
For example, this would generate a random variable from a normal distribution with mean 0 and variance 1:
import SwiftStats
let n = SwiftStats.Distributions.Normal(0, 1.0)
print(n.random())
Supported distributions include normal, exponential, binomial, and more.
It also supports fitting sample data to a given distribution, using the Maximum Likelihood Estimator for the distribution.
See the project readme for more info.

You could do it with exponential or quadratic functions: have x be your random number, and take y as the new random number. Then you just have to jiggle the equation until it fits your use case. Say I had (x^2)/10 + (x/300). Put your random number in (as some floating-point form), then take the floor with Int() when it comes out. So, if my random number generator goes from 0 to 9, I have a 40% chance of getting 0, a 30% chance of getting 1 - 3, a 20% chance of getting 4 - 6, and a 10% chance of an 8. You're basically trying to fake some kind of normal distribution.
Here's an idea of what it would look like in Swift:
func giveY(x: UInt32) -> Int {
    let xD = Double(x)
    return Int(xD * xD / 10 + xD / 300)
}

let ans = giveY(x: arc4random_uniform(10))
EDIT:
I wasn't very clear above - what I meant was you could replace the switch statement with some function that would return a set of numbers with a probability distribution that you could figure out with regression using wolfram or something. So, for the question you linked to, you could do something like this:
import Foundation

func returnLevelChange() -> Double {
    return 0.06 * exp(0.4 * Double(arc4random_uniform(10))) - 0.1
}

newItemLevel = oldItemLevel * returnLevelChange()
So that function returns a double somewhere between -0.04 and 2.1. That would be your "x% worse/better than current item level" figure. But since it's an exponential function, it won't return an even spread of numbers. arc4random_uniform(10) returns an Int from 0 to 9, and each of those results in a double like this:
0: -0.04
1: -0.01
2: 0.03
3: 0.1
4: 0.2
5: 0.34
6: 0.56
7: 0.89
8: 1.37
9: 2.1
Since each of those ints from the arc4random_uniform has an equal chance of showing up, you get probabilities like this:
40% chance of -0.04 to 0.1 (~ -5% - 10%)
30% chance of 0.2 to 0.56 (~ 20% - 55%)
20% chance of 0.89 to 1.37 (~ 90% - 140%)
10% chance of 2.1 (~ 200%)
That's similar to the probabilities the other person had. Now, for your function it's much more difficult, and the other answers are almost certainly more applicable and elegant. BUT you could still do it.
Arrange each of the letters in order of their probability, from largest to smallest. Then get their cumulative sums, starting with 0 and leaving off the last (so probabilities of 50%, 30%, 20% become 0, 0.5, 0.8). Then multiply them up until they're integers with reasonable accuracy (0, 5, 8). Then plot them: your cumulative probabilities are your x's, and the things you want to select with a given probability (your letters) are your y's. (You obviously can't plot actual letters on the y axis, so you'd just plot their indices in some array.) Then you'd try to find some regression there, and have that be your function. For instance, trying those numbers, I got
e^0.14x - 1
and this:
let letters: [Character] = ["a", "b", "c"]

func randLetter() -> Character {
    return letters[Int(exp(0.14 * Double(arc4random_uniform(10))) - 1)]
}
returns "a" 50% of the time, "b" 30% of the time, and "c" 20% of the time. Obviously pretty cumbersome for more letters, and it would take a while to figure out the right regression, and if you wanted to change the weightings you're have to do it manually. BUT if you did find a nice equation that did fit your values, the actual function would only be a couple lines long, and fast.

Related

Generate all numbers from 0 to 1 using a loop

This question might seem a bit silly :) But I think I'm missing something somewhere, so I'm a bit confused...
I wanted to generate all numbers from 0 to 1. In other words, if I do 1/2, I get 0.5. Then 0.5/2 = 0.25. Then 0.25/2 = 0.125. This goes on until 0.00000001 (a total of 26 divisions).
But I want to generate all numbers in an increasing order from 0.00000001 to 1.
I tried doing something like so...
let first = 0.00000001
let last = 1.0
let interval = first * 2
let sequence = stride(from: first, to: last, by: interval)
for element in sequence {
    print(element)
}
But it's not working; it seems to just print endlessly...
How can I properly use a for loop to print from 0.00000001 to 1 in a limited number of iterations? Or is there another loop to use in this case?
You can't use stride. stride produces an arithmetic sequence with a difference of interval, which is 0.00000002:
0.00000001
0.00000003
0.00000005
0.00000007
...
You want a geometric sequence between 0 and 1.
You could use sequence instead, which generates an infinite sequence:
let first = 0.00000001
let last = 1.0

for item in sequence(first: first, next: { $0 * 2 }).prefix(while: { $0 < last }) {
    print(item)
}
{ $0 * 2 } is the function that generates the next element, and prefix(while:) takes the leading elements that satisfy the `< last` condition.
Here is another way you could approach it: use stride to count the exponents down from 26 to 0, divide 1.0 by that power of 2, and display only the first 8 decimal places:
for n in stride(from: 26, through: 0, by: -1) {
    print(String(format: "%.8f", 1.0 / pow(2.0, Double(n))))
}
or equivalently (removing the division by using negative exponents):
for n in -26...0 {
    print(String(format: "%.8f", pow(2.0, Double(n))))
}
Output:
0.00000001
0.00000003
0.00000006
0.00000012
0.00000024
0.00000048
0.00000095
0.00000191
0.00000381
0.00000763
0.00001526
0.00003052
0.00006104
0.00012207
0.00024414
0.00048828
0.00097656
0.00195312
0.00390625
0.00781250
0.01562500
0.03125000
0.06250000
0.12500000
0.25000000
0.50000000
1.00000000

Algorithm to always sum sliders to 100% failing due to zeroes

This is (supposed to be) a function which makes sure that the sum of a number of sliders' values always adds up to globalTotal.
A slider's value can be changed manually by the user to changer.value; applying this function to the values of the other sliders then determines their new value, endVal.
It takes the startVal of the slider which needs changing and the original value of the slider that changed, changerStartVal, and determines the new value of the others by weighting.
The problem, and my question: sometimes remainingStartVals can be zero (when the changed slider gets moved all the way to maximum), or startVal can be zero (when the changed slider is moved to zero and then another slider is moved). When this happens I get a divide-by-zero or a multiply-by-zero respectively, both of which lead to incorrect results. Is there an easy way to fix this?
func calcNewVal(startVal: Float, changerStartVal: Float) -> Float {
    let remainingStartVals = globalTotal - changerStartVal
    let remainingNewVals = globalTotal - changer.value
    let endVal = ((startVal * (100 / remainingStartVals)) / 100) * remainingNewVals
    return endVal
}
This is a mathematical problem, not a problem related to Swift or any specific programming language so I'll answer with mathematical formulas and explanations rather than code snippets.
I don't really understand your algorithm either. For example in this line:
let endVal = ((startVal * (100 / remainingStartVals)) / 100) * remainingNewVals
you first multiply by 100 and then divide by 100, so you could just leave all these 100 factors out in the first place!
However, I think I understand what you're trying to achieve and the problem is that there is no generic solution. Before writing an algorithm you have to define exactly how you want it to behave, including all edge cases.
Let's define:
v_i as the value of the i-th slider, and
Δ_i as the change of the i-th slider's value.
Then you have to think of the following cases:
Case 1:
0 < v_i ≤ 1 for all sliders (other than the one you changed)
This is probably the common case you were thinking about. Here you want to adjust the values of your unchanged sliders so that the total change over all sliders is zero, compensating the change Δ_changed of the slider you moved. In other words:
∑_i Δ_i = 0
If you have 3 sliders this reduces to:
Δ_1 + Δ_2 + Δ_3 = 0
And if the slider that changed is the one with i = 1, then this requirement reads:
Δ_1 = −(Δ_2 + Δ_3)
You want the sliders to adjust proportionally, which means that this change Δ_1 should not be distributed equally over the other sliders but according to their current values:
Δ_2 = −w_2 · Δ_1
Δ_3 = −w_3 · Δ_1
The normed weight factors are:
w_2 = v_2 / (v_2 + v_3)
w_3 = v_3 / (v_2 + v_3)
Thus we get:
Δ_2 = −v_2 / (v_2 + v_3) · Δ_1
Δ_3 = −v_3 / (v_2 + v_3) · Δ_1
So these are the formulas to apply for this particular case.
However, there are quite a few other cases that don't work with this approach:
Case 2:
v_i = 0 for at least one, but not all, of the sliders (other than the one you changed)
In this case the approach from case 1 still works (and it is the logical thing to do). A slider whose value is zero simply never changes: all of the change is distributed over the sliders with a value > 0.
Case 3:
v_i = 0 for all sliders (other than the one you changed)
In this case the proportional change doesn't work because there is simply no information on how to distribute the change over the sliders. They're all zero! This is exactly your zero-division problem: in the case where we have 3 sliders and slider 1 changes, we get
v_2 + v_3 = 0
This is only another manifestation of the fact that the weight factors w_i are undefined. You'll have to define manually what happens in this case.
The most plausible thing to do here is to distribute the change evenly over all sliders:
Δ_i = −(1/n) · Δ_1
where n is the number of sliders (excluding the one that was changed!). With this logic, every slider gets "the same share" of the change.
Now that we're clear with our algorithm you can implement these cases in code. Here some pseudo code as an example:
if sum(valuesOfAllSlidersOtherThanTheSliderThatChanged) == 0 {
    for allUnchangedSliders {
        // distribute change evenly over the sliders
        Δ_i = -(1 / n) * Δ_changedSlider
    }
}
else {
    for allUnchangedSliders {
        // use weight factor to change proportionally
        Δ_i = -v_i / ∑(v_i) * Δ_changedSlider
    }
}
Please be aware that you must cache the current state of your sliders' values at the beginning, or (even better) first compute all the changes and then apply them in one batch. Otherwise you would use a value v_2′ that you just computed to determine the value v_3′, which would obviously result in incorrect values.
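As a rough illustration of that batch approach, here is a hedged Swift sketch (rebalance and its parameter names are made up for this example; clamping to the sliders' minimum and maximum values is left out):
func rebalance(values: [Double], changedIndex: Int, newValue: Double) -> [Double] {
    let delta = newValue - values[changedIndex]            // Δ of the changed slider
    let others = values.indices.filter { $0 != changedIndex }
    let remaining = others.reduce(0.0) { $0 + values[$1] } // ∑ v_i over the unchanged sliders
    var result = values
    result[changedIndex] = newValue
    for i in others {
        // Case 3 (all other sliders zero): distribute evenly.
        // Cases 1 and 2: distribute proportionally to the current values.
        let weight = remaining == 0 ? 1.0 / Double(others.count)
                                    : values[i] / remaining
        result[i] -= weight * delta
    }
    return result
}

// rebalance(values: [50, 30, 20], changedIndex: 0, newValue: 70) == [70, 18, 12]
All new values are computed from the cached values array, so the total is preserved and no slider's update depends on another slider's already-updated value.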
Hey @Sean, the simplest adjustment that I could think of here is to check that remainingStartVals is not 0, which means there are weights assigned to the other sliders, and also that the slider had a weight to begin with, i.e. its startVal is not 0:
func calcNewVal(startVal: Float, changerStartVal: Float) -> Float {
    var endVal: Float = 0
    let remainingStartVals = globalTotal - changerStartVal
    if remainingStartVals != 0 && startVal != 0 {
        let remainingNewVals = globalTotal - changer.value
        endVal = ((startVal * (100 / remainingStartVals)) / 100) * remainingNewVals
    }
    return endVal
}

logistic regression with gradient descent error

I am trying to implement logistic regression with gradient descent. I compute my cost function j_theta for each iteration, and fortunately j_theta decreases when plotted against the number of iterations.
The data set I use is given below:
x=
1 20 30
1 40 60
1 70 30
1 50 50
1 50 40
1 60 40
1 30 40
1 40 50
1 10 20
1 30 40
1 70 70
y= 0
1
1
1
0
1
0
0
0
0
1
The code that I managed to write for logistic regression using gradient descent is:
%1. The code below loads the data on your desktop into Octave's memory:
x=load('stud_marks.dat');
%y=load('ex4y.dat');
y=x(:,3);
x=x(:,1:2);
%2. Now we add a column x0 with all rows set to 1 into the matrix.
% First take the size:
[m,n]=size(x);
x=[ones(m,1),x];
X=x;
% Now we scale x1 and x2; we need to skip the first column x0 because it should stay 1.
mn = mean(x);
sd = std(x);
x(:,2) = (x(:,2) - mn(2))./ sd(2);
x(:,3) = (x(:,3) - mn(3))./ sd(3);
% We will not use the vectorized technique because it is hard to debug; we use for loops instead.
max_iter=50;
theta = zeros(size(x(1,:)))';
j_theta=zeros(max_iter,1);
for num_iter=1:max_iter
    % Calculate the cost function:
    j_cost_each=0;
    alpha=1;
    theta
    for i=1:m
        z=0;
        for j=1:n+1
            z=z+(theta(j)*x(i,j));
        end
        h= 1.0 ./(1.0 + exp(-z));
        j_cost_each=j_cost_each + ( (-y(i) * log(h)) - ((1-y(i)) * log(1-h)) );
    end
    j_theta(num_iter)=(1/m) * j_cost_each;
    % Gradient descent update for each theta(j):
    for j=1:n+1
        grad(j) = 0;
        for i=1:m
            z=(x(i,:)*theta);
            h=1.0 ./ (1.0 + exp(-z));
            grad(j) += (h-y(i)) * x(i,j);
        end
        grad(j)=grad(j)/m;
        theta(j)=theta(j)- alpha * grad(j);
    end
end
figure
% Plot the cost over the iterations:
plot(0:max_iter-1, j_theta, 'b', 'LineWidth', 2)
hold off
figure
%3. Plot the input data set to see the distribution of the two classes.
pos = find(y == 1); % positions in y of all samples with class 1
neg = find(y == 0); % positions in y of all samples with class 0
% Plot column x1 vs x2 for y=1 and y=0:
plot(x(pos, 2), x(pos,3), '+');
hold on
plot(x(neg, 2), x(neg, 3), 'o');
xlabel('x1 marks in subject 1')
ylabel('x2 marks in subject 2')
legend('Passed', 'Failed')
plot_x = [min(x(:,2))-2, max(x(:,2))+2]; % min and max decide the extent of the decision line
% Calculate the decision boundary line:
plot_y = (-1./theta(3)).*(theta(2).*plot_x +theta(1));
plot(plot_x, plot_y)
hold off
% The only difference is: in the last plot I used X, whereas now I use x, whose features are feature-scaled.
If you plot x1 vs x2, the two classes look like this: [scatter plot omitted]
After I run my code I create a decision boundary. The shape of the decision line seems to be okay, but it is a bit displaced. The graph of x1 vs x2 with the decision boundary: [plot omitted]
Please suggest where I am going wrong. Thanks :)
The new graph: [updated plot omitted]
If you look at the new graph, the coordinates of the x axis have changed. That's because I used x (feature-scaled) instead of X.
The problem lies in your cost function calculation and/or your gradient calculation; your plotting function is fine. I ran your dataset through the algorithm I implemented for logistic regression, but using the vectorized technique, because in my opinion it is easier to debug.
The final values I got for theta were
theta =
[-76.4242,
0.8214,
0.7948]
I also used alpha = 0.3
I plotted the decision boundary and it looks fine. I would recommend using the vectorized form, as it is easier to implement and to debug in my opinion.
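For reference, the vectorized form consists of the standard logistic regression formulas, where σ is the sigmoid function, X the m×(n+1) design matrix, and α the learning rate:
h = σ(Xθ), with σ(z) = 1 / (1 + e^(−z))
J(θ) = −(1/m) · ( yᵀ·log(h) + (1 − y)ᵀ·log(1 − h) )
∇J(θ) = (1/m) · Xᵀ·(h − y)
θ ← θ − α·∇J(θ)
Each iteration is then two matrix operations instead of three nested loops, which is both faster and easier to check.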
I also think your gradient descent setup is not quite correct: 50 iterations is just not enough, and the cost at the last iteration is not good enough. Maybe you should try to run it for more iterations, with a stopping condition.
Also check this lecture for optimization techniques.
https://class.coursera.org/ml-006/lecture/37

Scaling a number between two values

If I am given a floating point number but do not know beforehand what range the number will be in, is it possible to scale that number in some meaningful way to be in another range? I am thinking of checking to see if the number is in the range 0<=x<=1 and if not scale it to that range and then scale it to my final range. This previous post provides some good information, but it assumes the range of the original number is known beforehand.
You can't scale a number in a range if you don't know the range.
Maybe what you're looking for is the modulo operator. Modulo is basically the remainder after division; in most languages the operator is %.
0 % 5 == 0
1 % 5 == 1
2 % 5 == 2
3 % 5 == 3
4 % 5 == 4
5 % 5 == 0
6 % 5 == 1
7 % 5 == 2
...
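In Swift, for instance, % is defined for integers, while floating-point values use truncatingRemainder(dividingBy:). A small sketch (note that both keep the sign of the dividend, so negative inputs need extra handling):
let a = 7 % 5                                       // 2
let b = 7.3.truncatingRemainder(dividingBy: 5.0)    // 2.3 (approximately)
let c = (-7) % 5                                    // -2, not 3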
No, it is not possible. You can define a range and ignore all values outside it. Or you can collect statistics to find the range at run time (e.g. via histogram analysis).
Is it really about image processing? There are lots of related problems in the image segmentation field.
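If you do go the run-time route, here is a minimal Swift sketch of tracking the observed range and then min-max scaling (RunningRange is a hypothetical helper):
struct RunningRange {
    private(set) var minValue = Double.infinity
    private(set) var maxValue = -Double.infinity

    mutating func observe(_ x: Double) {
        if x < minValue { minValue = x }
        if x > maxValue { maxValue = x }
    }

    // Maps `x` onto 0...1 relative to the range observed so far.
    func scaled(_ x: Double) -> Double {
        let range = maxValue - minValue
        guard range > 0 else { return 0 }  // degenerate: fewer than two distinct values seen
        return (x - minValue) / range
    }
}

var r = RunningRange()
[99.0, 99.5, 100.0].forEach { r.observe($0) }
let s = r.scaled(99.001)  // ≈ 0.001, because the observed range is [99, 100]
The scaling is only as good as the samples observed so far.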
You want to scale a single random floating point number to be between 0 and 1, but you don't know the range of the number?
What should 99.001 be scaled to? If the range of the random number was [99, 100], then the scaled number should be pretty close to 0. If the range was [0, 100], then the scaled number should be pretty close to 1.
In the real world, you always have some sort of information about the range (either the range itself, or how wide it is). Without further info, the answer is "No, it can't be done."
I think the best you can do is something like this:
double scale(double x) {
    if (x < -1) return -1 / x - 2;
    if (x > 1) return 2 - 1 / x;
    return x;
}
This function is monotonic, and has a range of -2 to 2, but it's not strictly a scaling.
I am assuming that you have the result of some 2-dimensional measurements and want to display them in color or grayscale. For that, I would first want to find the maximum and minimum and then scale between these two values.
static double[][] scale(double[][] in, double outMin, double outMax) {
    double inMin = Double.POSITIVE_INFINITY;
    double inMax = Double.NEGATIVE_INFINITY;
    // First pass: find the input range.
    for (double[] inRow : in) {
        for (double d : inRow) {
            if (d < inMin)
                inMin = d;
            if (d > inMax)
                inMax = d;
        }
    }
    double inRange = inMax - inMin;
    double outRange = outMax - outMin;
    // Second pass: scale each value into the output range.
    double[][] out = new double[in.length][];
    for (int i = 0; i < in.length; i++) {
        double[] outRow = new double[in[i].length];
        for (int j = 0; j < in[i].length; j++) {
            double normalized = (in[i][j] - inMin) / inRange; // 0 .. 1
            outRow[j] = outMin + normalized * outRange;
        }
        out[i] = outRow; // store the scaled row
    }
    return out;
}
This code is untested and just shows the general idea. It further assumes that all your input data is in a "reasonable" range, away from infinity and NaN, and that not all values are equal (otherwise inRange is zero and the division fails).
