How to convert a tuple of numbers to float in F#?

Suppose I have a tuple of numbers:
let mynum = (3, 5, 8.9, 45, 127.3)
It mixes ints and floats. In order to do calculations like the average, I have to convert them all to float. How do I do the conversion?

I don't know how you ended up with that tuple; I would advise reviewing your design. I personally consider tuples of more than 4 elements a smell; maybe a record with named fields would be a better fit.
Anyway, you can easily convert it to a list of floats and then calculate the average:
let mynum = (3, 5, 8.9, 45, 127.3)
let inline cnv (x1, x2, x3, x4, x5) = [float x1; float x2; float x3; float x4; float x5]
let lst = cnv mynum // float list = [3.0; 5.0; 8.9; 45.0; 127.3]
List.average lst // float = 37.84
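If the data really has a fixed shape like this, a record with named fields keeps things readable. A minimal sketch (the Measurements type and its field names are made up for illustration):
// hypothetical record with named fields instead of a 5-tuple
type Measurements =
    { A: float; B: float; C: float; D: float; E: float }
let m = { A = 3.0; B = 5.0; C = 8.9; D = 45.0; E = 127.3 }
// spell the fields out once, then average them
let average { A = a; B = b; C = c; D = d; E = e } =
    (a + b + c + d + e) / 5.0
average m // float = 37.84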

How did the input end up in this format? Tuples aren't supposed to be used this way; they are intended for combining a few objects, each strongly and independently typed.
The type of a tuple changes with its length, so there is no straightforward way to perform sequence operations on them. They are not compatible with seq<'T> anyway, since the types of their components are unrelated. There is no average function for tuples, and if there were, it would have to be overloaded for all possible arities (number of components).
You might want to import the data into another collection, such as a list, a set, an array, or another type of sequence, and have the importer handle conversions where necessary. For example, if the input were a list of strings (as taken from a text file or such), System.Double.Parse or, as pointed out by ildjarn in the comments, the float operator, can be used to turn them into floats:
let input = ["3"; "5"; "8.9"; "45"; "127.3"]
List.map float input
This returns [3.0; 5.0; 8.9; 45.0; 127.3], which is of type float list: an immutable, singly-linked list of double-precision floats.

A working function but having trouble with a particular float value

This function takes a float and then spits out the two integers for the decimal value (the numerator and denominator of a fraction). At least, that was the intention.
let flr (x:float) = float(int(x))
let f x =
    let r y = let x = x * y in x = flr(x)
    let rec f y =
        if r(y)
        then x*y,y
        else f(y+1.0)
    f 1.0
f 0.2;;
val it: float * float = (1.0, 5.0)
f 3.14;;
val it: float * float = (157.0, 50.0)
Here is an example where the integers (well, they will be integers eventually) have not been "simplified":
f 0.14;;
val it: float * float = (35.0, 250.0)
Checking whether the fractional part is less than .01, as opposed to exactly zero, got around this issue, but I don't really like that solution, so I set it back to what you see in the code above. I am using the function below for some of the values that do not simplify, though:
let g (x,y) =
    let rec f k =
        if x/k = flr(x/k)
        then g(k)
        else f(k-1.0)
    and g k =
        if y/k = flr(y/k)
        then x/k,y/k
        else f(k-1.0)
    if x < y then f x else f y
Anyway, the main issue is this value:
3.142857143
Homeboy just keeps grinding without stack errors, and I'm not sure what I've run into here. Any clarity would be awesome! Thanks, y'all.
Your algorithm is trying to find a rational number to represent a decimal number (represented as a floating point number).
For any input x, you are looking for a number represented as p/q such that x=p/q and you do this by incrementing q, starting from 1 and checking if you can find an integer p to make this work.
This works fine for numbers that have a nice rational representation like 0.2, but it does not work great for numbers like 3.142857 that do not have a simpler rational representation. For 3.142857, you will just keep iterating until you reach 3142857/1000000 (which is technically correct, but not very helpful).
As mentioned in the comments, there are issues caused by the fact that floating-point numbers cannot be precisely compared, but also, iterating like this for 3.142857143 might just take too long.
You can look up better algorithms for finding a rational number for a given decimal. You could also see if you can accept some margin of error. If you do not need a completely precise solution, you could for example change your r test function to something like:
let r y =
    let x = x * y
    x < flr(x) + 0.0001 && x > flr(x) - 0.0001
This will not give you exactly the same number, but it will likely find a solution that is good enough.
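Putting the pieces together, a minimal self-contained sketch of that tolerance-based version could look like this (the 0.0001 tolerance is arbitrary, loop is just a renamed inner f, and the numerator comes back as a float that is only approximately an integer):
let flr (x: float) = float (int x)
let f x =
    // r y: is x*y within 0.0001 of a whole number?
    // note that flr truncates, so a product sitting just below a whole number still fails this test
    let r y =
        let x = x * y
        x < flr x + 0.0001 && x > flr x - 0.0001
    let rec loop y =
        if r y
        then x * y, y
        else loop (y + 1.0)
    loop 1.0
f 3.142857143 // terminates quickly with roughly (22.0, 7.0), i.e. about 22/7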

What are units in neural network (backpropagation algorithm)

Please help me to understand the "unit" concept in neural networks. From the book I understood that a unit in the input layer represents an attribute of a training tuple. However, it is left unclear how exactly it does so.
Here is the diagram:
There are two possible ways of thinking about the input units. The first is that X1 stands for attr1, X2 stands for attr2, and so on. The other is that X1, X2, and X3 all represent attr1, with X1 standing for Value.VALUE_ONE, ..., and X3 standing for Value.VALUE_THREE. In the latter case, if attr1 = Value.VALUE_TWO, then that value is weighted and fed simultaneously to the second layer.
public class Tuple
{
    private Value attr1;
    private Value attr2;
    private Value attr3;
}

public enum Value
{
    VALUE_ONE,
    VALUE_TWO,
    VALUE_THREE
}
The second question is about the hidden layer units. How is it decided how many units there should be in the hidden layer, and what do they represent in the model?
The "units" are just floating point values.
All computations happening there are vector multiplications, and thus can be parallelized well using matrix multiplications and GPU hardware.
The general computation looks like this:
double phi(double[] x, double[] w, double theta) {
    double sum = theta;
    for (int i = 0; i < x.length; i++)
        sum += x[i] * w[i];
    return Math.tanh(sum);
}
except that you don't want to do this in Java code yourself. You want to do this on a GPU in a parallelized way, because this will be 100x faster.

Method overload with unit of measure

I'm noobing around with F#, trying to create overloads of cos that accept angles with units.
This is my code:
[<Measure>] type rad
[<Measure>] type deg
let toRad (x: float<deg>) =
    (float x) * 3.14159265 / 180.0
    |> LanguagePrimitives.FloatWithMeasure<rad>
let cos (angle: float<rad>) = cos(float angle)
let cos (angle: float<deg>) = cos(toRad angle) // get duplicate definition of cos here
Compiler complains about duplicate definition of cos on the last row.
Measure types are erased (see the specification), so you effectively have two definitions of cos(angle: float) which causes the error.
You could create a union type for the two possibilities
type Angle = Degrees of float | Radians of float
or give the functions different names.
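For the union approach, a single function can pattern match on the case. A small sketch (cosAngle is a made-up name, chosen only to avoid shadowing the built-in cos; the conversion constant mirrors the one above):
type Angle =
    | Degrees of float
    | Radians of float
let cosAngle angle =
    match angle with
    | Radians r -> cos r
    | Degrees d -> cos (d * 3.14159265 / 180.0)
cosAngle (Degrees 60.0) // 0.5 (approximately)
cosAngle (Radians 0.0)  // 1.0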

How do I use "i" as an imaginary number in an equation in Objective-C [duplicate]

Let's say I want to calculate Vout where Vout = (Vin/((I*w*R*C)+1));
and where "i" should be defined as sqrt(-1). How do I do that in Objective-C?
You can #include <complex.h> and then use either the _Complex_I or I macro (note the uppercase). The type of a variable that holds complex values is denoted with the _Complex keyword, which can also be written simply as complex.
double complex c1 = 1.0 + 2.0 * I; // 1+2i
double complex c2 = 2.0 + 3.0 * I; // 2+3i
double complex sum = c1 + c2;
double complex mul = c1 * c2;
You can then use the creal and cimag functions to get the real and imaginary parts of a complex number.
So in your case:
double Vin = 20; // in Volts
double w = 60; // frequency of your sinusoidal source (60Hz)
double R = 50; // 50 ohms
double C = 20e-6; // 20 µF
double complex invertZc = I*w*C; // Zc = 1/jwC, invertZc = jwC
double complex Vout = Vin / (1. + R*invertZc); // Vout = Vin * 1.0 / (1+Zr/Zc)
Note that all this is provided by the C language itself (C99, see here) and is not specifically part of Objective-C; it comes from C (Objective-C being a superset of C), and it is supported by the GCC and LLVM compilers used by Xcode.
Objective-C does not have any built-in facility for handling complex numbers, so you have to do the calculations yourself. Create a typedef of a struct imaginaryNumber that contains a real and an imaginary part. Write functions that add and multiply those structures and return another one as a result. (To multiply two complex numbers (a1+b1i)•(a2+b2i), treat the numbers as polynomials and use the FOIL method to calculate the result. The product of the two imaginary terms becomes a real number because i • i = -1.)

Get a fraction from its decimal number

I am developing a program that solves a system of equations. When it gives me the results, they look like "x1 = 1,36842". I'd like to get the fraction for that "1,36842", so I wrote this code.
procedure TForm1.Button1Click(Sender: TObject);
var
  numero, s: string;
  a, intpart, fracpart, frazfatta: double;
  y, i, mcd, x, nume, denomin, R: integer;
begin
  a := StrToFloat(Edit1.Text); // get the value of a
  IntPart := Trunc(a);         // here I get the numerator and the denominator
  FracPart := a - Trunc(a);
  Edit2.Text := FloatToStr(FracPart);
  numero := '1';
  for i := 1 to (Length(Edit2.Text) - 2) do
  begin
    numero := numero + '0';
  end; // in this loop it creates a string that has as many 0s as the length of the denominator
  Edit3.Text := FloatToStr(IntPart);
  y := StrToInt(numero);
  x := StrToInt(Edit3.Text);
  while y <> 0 do
  begin
    R := x mod y;
    x := y;
    y := R;
  end;
  mcd := x; // at the end of this loop I have the greatest common divisor
  nume := StrToInt(Edit3.Text) div mcd;
  denomin := StrToInt(numero) div mcd;
  Memo1.Lines.Add('fraction: ' + IntToStr(nume) + '/' + IntToStr(denomin));
end;
It doesn't work correctly because the fraction that it gives to me is wrong. Could anyone help me please?
Your code cannot work because you are using binary floating point, and binary floating point types cannot represent the decimal numbers that you are trying to represent. Representable binary floating point numbers are of the form s·2^e, where s is the significand and e is the exponent. So, for example, you cannot represent 0.1 as a binary floating point value.
The most obvious solution is to perform the calculation using integer arithmetic. Don't call StrToFloat at all. Don't touch floating point arithmetic. Parse the input string yourself. Locate the decimal point. Use the number of digits that follow to work out the decimal scale. Strip off any leading or trailing zeros. And do the rest using integer arithmetic.
As an example, suppose the input is '2.79'. Convert that, by processing the text, into numerator and denominator variables
Numerator := 279;
Denominator := 100;
Obviously you'd have to code string parsing routines rather than use integer literals, but that is routine.
Finally, complete the problem by finding the gcd of these two integers.
The bottom line is that to represent and operate on decimal data you need a decimal algorithm. And that excludes binary floating point.
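As a rough sketch of that string-parsing idea (written in F#, the language used elsewhere on this page, rather than Delphi; decimalToFraction is a made-up name, and signs and locale-specific decimal separators are ignored for brevity):
// turn a decimal string like "2.79" into a reduced fraction,
// using only integer arithmetic (no floating point involved)
let decimalToFraction (s: string) =
    let rec gcd a b = if b = 0L then a else gcd b (a % b)
    match s.Split('.') with
    | [| intPart; fracPart |] ->
        let numerator = int64 (intPart + fracPart)   // "2" + "79" -> 279
        let denominator = pown 10L fracPart.Length   // 10^2      -> 100
        let d = gcd numerator denominator
        numerator / d, denominator / d
    | [| intPart |] -> int64 intPart, 1L             // no fractional part
    | _ -> failwith "not a decimal number"
decimalToFraction "2.79"    // (279L, 100L)
decimalToFraction "1.36842" // (68421L, 50000L)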
I recommend defining a GreaterCommonDivisor function first (wiki reference).
This is going to be Java/C-like code, since I'm not familiar with Delphi.
float x = inputnum; // where inputnum is a float
                    // e.g. x = 123.56
Then, multiplying
int n = 1;
float decimalpart = x % 1;   // fractional part of x
while (decimalpart != 0) {   // or cast to int and check for equality -> (int) x == x
    x = x * 10;
    decimalpart = x % 1;     // or use a function that returns the decimal part, if the cast doesn't work
    n *= 10;
}
// running e.g. x = 123.56: now x = 12356
// and n = 100
Then you should have (float) x / n == inputnum at this point, e.g. 12356 / 100 == 123.56.
This means you have a fraction that may not be simplified yet. All you do now is implement and use the GCD function:
int gcd = GreaterCommonDivisor(x, n);
// GreaterCommonDivisor(12356, 100) returns 4
// therefore for correct implementation gcd = 4
x /= gcd; // 12356 / 4 = 3089
n /= gcd; // 100 / 4 = 25
This should be quick and simple to implement, but:
Major Pitfalls:
The float must be terminating. For example, 0.333333333333333333 won't be reduced to the expected 1/3.
float * n must stay <= the maximum int value, otherwise there will be an overflow. There are ways to work around this, but other solutions may be a better fit for such large numbers.
Continued fractions can be used to find good rational approximations to real numbers. Here's an implementation in JavaScript, I'm sure it's trivial to port to Delphi:
function float2rat(x) {
    var tolerance = 1.0E-6;
    var h1 = 1; var h2 = 0;
    var k1 = 0; var k2 = 1;
    var b = x;
    do {
        var a = Math.floor(b);
        var aux = h1; h1 = a * h1 + h2; h2 = aux;
        aux = k1; k1 = a * k1 + k2; k2 = aux;
        b = 1 / (b - a);
    } while (Math.abs(x - h1 / k1) > x * tolerance);
    return h1 + "/" + k1;
}
For example, 1.36842 is converted into 26/19.
You can find a live demo and more information about this algorithm on my blog.
@Joni
I tried 1/2 and the result was a "division by zero" error.
I corrected the loop by adding:
if b - a = 0 then Break;
to avoid
b := 1 / (b - a);
