Is it true to say that there are no fractional power units in F#?
In addition to what has already been said, the best resource for information about (not just) F# units of measure is the PhD thesis of Andrew Kennedy, who actually designed F# units. He mentions fractional units:
The most important decision is whether or not to allow fractional exponents
of dimensions. The argument against them is philosophical: a quantity with a
dimension such as M^(1/2) makes no sense physically, and if such a thing arose, it
would suggest revision of the set of base dimensions rather than a re-evaluation of
integral exponents. The argument in favour is pragmatic: sometimes it is easier
to write program code which temporarily creates a value whose dimension has
fractional exponents. In this dissertation the former view prevails, and fractional
exponents are not considered. However, most of the theory would apply just the
same; any potential differences are highlighted as they arise.
I think this is essentially the reason why F# does not have fractional units: the F# design quite closely follows Andrew Kennedy's work to make sure it is sound.
Update: with F# 4.0, support for fractional exponents has been implemented.
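For instance, with rational exponents the square root of a bandwidth can be typed directly. A minimal sketch (the Hz measure is declared here purely for illustration):

[<Measure>] type Hz

// sqrt has type float<'u ^ 2> -> float<'u>; with rational exponents the
// compiler can instantiate 'u to Hz^(1/2), so this compiles in F# 4.0 and later:
let sqrtBandwidth (bw: float<Hz>) : float<Hz^(1/2)> = sqrt bw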
Units with fractional exponents are quite common and there is nothing special about them.
Probably everyone in technology has come across a voltage noise density, which is measured per sqrt(Hz).
This makes a lot of sense physically: the noise power is proportional to the bandwidth, and the noise voltage is the square root of the power; no strange mathematics here.
To create a new base unit every time one comes across a fractional power exponent is not the right approach.
These units are not SI units and their use breaks library compatibility.
If you define the sqrtHz as a new unit and I define the rootHz, our code can't work together.
Anyway, one would need to introduce quite a big set of base units to have a complete set:
Hz^(1/2), Hz^(1/3), Hz^(1/5), ...
Just offering rational exponents seems to be the better choice; Boost.Units does so, by the way.
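To illustrate the compatibility problem, here is a sketch of the workaround in pre-4.0 F# (the names sqrtHz and rootHz are purely illustrative): if you and I each pick our own base unit for the root, the compiler has no way to know they denote the same thing.

[<Measure>] type sqrtHz
[<Measure>] type Hz = sqrtHz^2   // Hz defined in terms of its root

[<Measure>] type V
// voltage noise density in V per sqrt(Hz):
let myNoise : float<V/sqrtHz> = 5.0<V/sqrtHz>

// someone else's library, with its own root unit:
[<Measure>] type rootHz
let theirNoise : float<V/rootHz> = 5.0<V/rootHz>

// myNoise + theirNoise   // does not compile: sqrtHz and rootHz never unify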
The absence of literal fractional-power units of measure does not in any way discount F#'s unit facility, as it allows expressing seemingly fractional exponent relationships the other way around, taking the smallest fraction as the base dimension:
let takeSqrt (x: float<_>) = sqrt(x)
has the inferred signature float<'u ^ 2> -> float<'u>, thereby avoiding the introduction of an imaginary "naturally fractional" float<'u> -> float<'u ^ 1/2>.
let moreComplicated (x: float<_>) (y: float<_>) =
    sqrt(x*x + y*y*y)
has the inferred signature float<'u ^ 3> -> float<'u ^ 2> -> float<'u ^ 3>, where all unit-of-measure conversions stay valid relative to some derived implicit base dimension float<'u>.
The fact that the piece of code below
[<Measure>]type m
let c = sqrt(1.0<m>)
does not even compile, with the diagnostic The unit of measure 'm' does not match the unit of measure ''u ^ 2', can be considered a curse or a blessing, but it is a clear indication that the unit measure checks are in place.
EDIT: After reading the OP's comment and the excerpt from Andrew Kennedy's paper, it appears #nicolas is correct -- F# doesn't support units of measure with fractional exponents.
Shouldn't the answer be as easy as saying: yes, hertz are measured in s^-1, so per-root-hertz is the same as s^(1/2)? There ya' go.
Also, I like the philosophical idea of using, say, m^(1/2) if it came up in calculations and perhaps one day understanding what that unit means in the literal sense.
This may show my naiveté, but it is my understanding that quantum computing's obstacle is stabilizing the qubits. I also understand that standard computers use binary (on/off); but it seems like it may be easier with today's tech to read electric states between 0 and 9. Binary was the answer because it was very hard to read the varying amounts of electricity, components degrade over time, and maybe maintaining a clean electrical "signal" was challenging.
But wouldn't it be easier to try to solve the problem of reading varying levels of electricity so we can go from 2 inputs to 10, thereby increasing the smallest unit of storage and exponentially increasing the number of paths through the logic gates?
I know I am missing quite a bit (sorry the puns were painful) so I would love to hear why or why not.
Thank you
"Exponentially increasing the number of paths through the logic gates" is exactly the problem. More possible states for each n-ary digit means more transistors, larger gates and more complex CPUs. That's not to say no one is working on ternary and similar systems, but the reason binary is ubiquitous is its simplicity. For storage, more possible states also means we need more sensitive electronics for reading and writing, and a much higher error frequency during these operations. There's a lot of hype around using DNA (base-4) for storage, but this is more on account of the density and durability of the substrate.
You're correct, though, that your question is missing quite a bit -- qubits are entirely different from classical information, whether we use bits or digits. Classical bits and trits respectively correspond to vectors like
Binary: |0> = [1,0]; |1> = [0,1];
Ternary: |0> = [1,0,0]; |1> = [0,1,0]; |2> = [0,0,1];
A qubit, on the other hand, can be a linear combination of classical states
Qubit: |Ψ> = α |0> + β |1>
where α and β are arbitrary complex numbers such that |α|^2 + |β|^2 = 1.
This is called a superposition, meaning even a single qubit can be in one of an infinite number of states. Moreover, unless you prepared the qubit yourself or received some classical information about α and β, there is no way to determine the values of α and β. If you want to extract information from the qubit you must perform a measurement, which collapses the superposition and returns |0> with probability |α|^2 and |1> with probability |β|^2.
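To make the measurement rule concrete, here is a minimal F# sketch of measuring a single qubit (all names are illustrative, not from any quantum library):

open System
open System.Numerics

// A qubit as a pair of complex amplitudes with |alpha|^2 + |beta|^2 = 1.
// Measuring collapses the state: |0> with probability |alpha|^2, else |1>.
let measure (rng: Random) (alpha: Complex) (beta: Complex) =
    if rng.NextDouble() < alpha.Magnitude ** 2.0 then 0 else 1

// Equal superposition: alpha = beta = 1/sqrt 2, so each outcome is 50/50.
let rng = Random()
let a = Complex(1.0 / sqrt 2.0, 0.0)
let b = Complex(1.0 / sqrt 2.0, 0.0)
let samples = [ for _ in 1 .. 10 -> measure rng a b ]  // e.g. [0; 1; 1; 0; ...]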
We can extend the idea to qutrits (though, just like trits, these are even more difficult to effectively realize than qubits):
Qutrit: |Ψ> = α |0> + β |1> + γ |2>
These requirements mean that qubits are much more difficult to realize than classical bits of any base.
The Scharr filter is explained in Scharr's dissertation. However, the values given on page 155 (167 in the PDF) are [47 162 47] / 256. Multiplying this with the derivative filter [-1 0 1] / 2 would yield:

[ -47 0  47]
[-162 0 162] / 512
[ -47 0  47]

Yet all other references I found use

[ -3 0  3]
[-10 0 10]
[ -3 0  3]

Which is roughly the same as the ones given by Scharr, scaled by a factor of 32.
Now my guess is that the range can be represented better, but I'm curious if there is an official explanation somewhere.
To get the ball rolling on this question in case no "expert" can be found...
I believe the values [3, 10, 3] ... instead of [47 162 47] / 256 ... are used simply for speed. Recall that this method is competing against the Sobel Operator, whose coefficient values are 0 and positive/negative 1's and 2's.
Even though the divisor in the division, 256 or 512, is a power of 2 and the division can be performed by a shift, doing that and multiplying by 47 or 162 is going to take more time. A multiplication by 3, however, can in fact be done on some RISC architectures, like the IBM POWER series, in a single shift-and-add operation. That is, 3x = (x << 1) + x. (On these architectures, the shifter and adder are separate units and can run independently.)
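For illustration, here is the shift-and-add identity written out (an F# sketch; any language with bit shifts works the same way):

// 3x computed with one shift and one add instead of a multiply:
let times3 (x: int) = (x <<< 1) + x
times3 7  // 21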
I don't find it surprising that the PhD thesis used the more complicated and probably more precise formula; it needed to prove or demonstrate something, and the author probably wasn't totally certain or concerned that it would be used and implemented alongside other methods. The purpose in the thesis was probably to have "perfect rotational symmetry". Afterwards, when someone decided to implement it, I suspect that person used the approximation formula, giving up a little of the perfect rotational symmetry to gain speed. That person's goal, as I said, was to have something competitive, at the expense of a little bit of this rotational stuff.
Since I'm guessing you are willing to do this work, as it is your thesis, my suggestion is to implement the original algorithm and benchmark it against both the OpenCV Scharr and Sobel code.
The other way to try to get an "official" answer is: "Use the source, Luke!". The code is on GitHub, so check it out and see who added the Scharr filter there and contact that person. I won't put the person's name here, but I will say that the code was added 2010-05-11.
I have a program that solves equations, and sometimes the solutions x1 and x2 are numbers with a lot of decimal digits. For example, when Δ = 201 (Δ = discriminant), the square root gives me a floating-point number with a long fractional part.
I need a good approximation of that number because I also have a function that converts it into a fraction. So I thought to do this:
Result := FormatFloat('0.#####', StrToFloat(solx1));
solx1 is a double. In this way, the number '456,9067896' becomes '456,90679'.
My question is this: if I approximate in this way, will the fraction of 456,9067896 be correct (and the same) if I have 456,90679?
will the fraction of 456,9067896 be correct (and the same) if I have 456,90679?
No, because 0.9067896 is unequal to 0.90679.
But why do you want to round the numbers? Just let them be as they are. Shorten them only for visual representation.
If you are worried about complete correctness of the result, you should not use floating-point numbers at all, because floating-point values are, by definition, rounded approximations of real numbers. Only the first 5-6 decimal digits of a 32-bit floating-point value are generally reliable; the following ones are unreliable, due to rounding error.
If you want complete precision, you should be using symbolic maths (rational numbers and symbolic representation for irrational/imaginary numbers).
To compare two floating-point values with a given precision, just use the SameValue() function from the Math unit or its sibling CompareValue().
if SameValue(456.9067896, 456.90679, 1E-5) then ...
You can specify the precision at which the comparison will take place.
Or you can use a Currency value, which has a fixed precision of 4 decimal places. So it won't have rounding issues any more. But you cannot do all mathematical computation with it (huge or tiny numbers are not handled properly): its main use is accounting computations.
You should never use string representations to compare floats, since that can be very confusing and does not offer good rounding abilities.
If I have variables a, b, and c of type double, let c := a/b, and give a and b values of 7 and 10, then c's value of 0.7 registers as being LESS THAN 0.70.
On the other hand, if the variables are all type extended, then c's value of 0.7 does not register as being less than 0.70.
This seems strange. What information am I missing?
First, it needs to be noted that float literals in Delphi are of the Extended type. So when you compare a double to a literal, the double is probably first "expanded" to Extended, and then compared. (Edit: this is true only in 32-bit applications. In 64-bit applications, Extended is an alias for Double.)
Here, all of the ShowMessage calls will be executed.
procedure DoSomething;
var
  A, B: Double;
begin
  A := 7/10;
  B := 0.7; // Here, we lower the precision of "0.7" to double

  // Here, A is expanded to Extended... but it has already lost precision.
  // This is (kind of) similar to doing Round(0.7) <> 0.7
  if A <> 0.7 then
    ShowMessage('Weird');

  if A = B then // Here it works correctly.
    ShowMessage('Ok...');

  // Still... the best way to go:
  if SameValue(A, 0.7, 0.0001) then
    ShowMessage('That will never fail you');
end;
Here is some literature for you:
What Every Computer Scientist Should Know About Floating-Point Arithmetic
There is no representation for the mathematical number 0.7 in binary floating-point. Your statement computes in c the closest double, which (according to what you say, I didn't check) is a little below 0.7.
Apparently in extended precision the closest floating-point number to 0.7 is a little above it. But there still is no exact representation for 0.7. There isn't any at any precision in binary floating-point.
As a rule of thumb, any non-integer number whose last non-zero decimal is not 5 cannot be represented exactly as a binary floating-point number (the converse is not true: 0.05 cannot be represented exactly either).
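You can make the misrepresentation visible by printing more digits than the default. A quick sketch (shown in F#, but any language with IEEE doubles behaves the same; the exact trailing digits depend on the runtime's formatter):

printfn "%.20f" 0.7   // 0.69999999999999995559
printfn "%.20f" 0.05  // 0.05000000000000000278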
It has to do with the number of digits of precision in the two different floating point types you're using, and the fact that a lot of numbers cannot be represented exactly, regardless of precision. (From the pure math side: irrational numbers outnumber rationals)
Take 2/3, for example. It can't be represented exactly in decimal. With 4 significant digits, it would be represented as 0.6667. With 8 significant digits, it would be 0.66666667.
The trailing 7 is round-up, reflecting that the next digit would be > 5 if there were room to keep it.
0.6667 is greater than 0.66666667, so the computer will evaluate 2/3 (4 digits) > 2/3 (8 digits).
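The effect is easy to reproduce; for example (an F# sketch formatting the same double at both precisions):

printfn "%.4f" (2.0 / 3.0)  // 0.6667
printfn "%.8f" (2.0 / 3.0)  // 0.66666667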
The same is true with your .7 vs .70 in double and extended vars.
To avoid this specific issue, try to use the same numeric type throughout your code. When working with floating-point numbers in general, there are a lot of little things you have to watch out for. The biggest is not to write your code to compare two floats for equality -- even if they should be the same value, there are many factors in calculations that can make them end up a tiny bit different. Instead of comparing for equality, you need to test that the difference between the two numbers is very small. How small the difference has to be is up to you and the nature of your calculations; it is usually referred to as epsilon, a term borrowed from calculus.
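Such an epsilon test is essentially a one-liner; a minimal sketch (in F#, mirroring what Delphi's SameValue does):

// true when a and b differ by no more than epsilon
let sameValue (a: float) (b: float) (epsilon: float) = abs (a - b) <= epsilon
sameValue 0.7 (7.0 / 10.0) 1e-9  // true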
You're missing the Wikipedia article on floating-point arithmetic.
See especially its 'Accuracy problems' section. See also Pascal's answer above.
In order to fix your code without using the Extended type, you must add the Math unit and use the SameValue function from there, which is built especially for this purpose.
Be sure to use an Epsilon value different from 0 when you use SameValue in your case.
For example:
var
  a, b, c: double;
begin
  a := 7; b := 10;
  c := a / b;
  if SameValue(c, 0.70, 0.001) then
    ShowMessage('Ok')
  else
    ShowMessage('Wrong!');
end;
HTH
In "F# for Scientists" Jon Harrop says:
Roughly speaking, values of type int approximate real
numbers between min-int and max-int with a constant absolute error of ±1/2
whereas values of the type float have an approximately-constant relative error that
is a tiny fraction of a percent.
Now, what does it mean? Is the int type inaccurate?
Why does C# return 0.1 for (1 - 0.9), but F# returns 0.099999999999978? Is C# more accurate and more suitable for scientific calculations?
Should we use decimal values instead of double/float for scientific calculations?
For an arbitrary real number, either an integral type or a floating point type is only going to provide an approximation. The integral approximation will never be off by more than 0.5 in one direction or the other (assuming that the real number fits within the range of that integral type). The floating point approximation will never be off by more than a small percentage (again, assuming that the real is within the range of values supported by that floating point type). This means that for smaller values, floating point types will provide closer approximations (e.g. storing an approximation to PI in a float is going to be much more accurate than the int approximation 3). However, for very large values, the integral type's approximation will actually be better than the floating point type's (e.g. consider the value 9223372036854775806.7, which is only off by 0.3 when represented as 9223372036854775807 as a long, but which is represented by 9223372036854780000.000000 when stored as a float).
This is just an artifact of how you're printing the values out. 9/10 and 1/10 cannot be exactly represented as floating point values (because the denominator isn't a power of two), just as 1/3 can't be exactly written as a decimal (you get 0.333... where the 3's repeat forever). Regardless of the .NET language you use, the internal representation of this value is going to be the same, but different ways of printing the value may display it differently. Note that if you evaluate 1.0 - 0.9 in FSI, the result is displayed as 0.1 (at least on my computer).
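You can see the formatting difference without leaving F#; a small sketch (the stored double is bit-for-bit the same either way):

let d = 1.0 - 0.9
printfn "%g" d     // 0.1                  (rounded for display)
printfn "%.17g" d  // 0.099999999999999978 (17 significant digits)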
What type you use in scientific calculations will depend on exactly what you're trying to achieve. Your answer is generally only going to be approximately accurate. How accurate do you need it to be? What are your performance requirements? I believe that the decimal type is actually a fixed point number, which may make it inappropriate for calculations involving very small or very large values. Note also that F# includes arbitrary precision rational numbers (with the BigNum type), which may also be appropriate depending on your input.
No, F# and C# use the same double type. Floating point is almost always inexact. Integers are exact, though.
UPDATE:
The reason you are seeing a difference is the printing of the number, not the actual representation.
For the first point, I'd say it means that int can be used to represent any real number in the integer's range, with a constant maximum error in [-0.5, 0.5]. This makes sense. For instance, pi could be represented by the integer value 3, with an error smaller than 0.15.
Floating point numbers don't share this property; their maximum absolute error is not independent of the value you're trying to represent.
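The growth of the absolute error is easy to observe: the distance from a double to the next representable double (one "ulp") scales with the magnitude. A sketch using bit-level conversion:

open System
// distance from x to the next representable double:
let ulp (x: float) =
    BitConverter.Int64BitsToDouble(BitConverter.DoubleToInt64Bits x + 1L) - x
ulp 1.0   // 2.220446049250313e-16
ulp 1e16  // 2.0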
3 - This depends on the calculations: sometimes float is a good choice, sometimes you can use int. But there are tasks where neither float nor decimal gives you enough precision.
The reason against using int:
> 1/2;;
val it : int = 0
The reason against using float (also known as double in C#):
> (1E-10 + 1E+10) - 1E+10;;
val it : float = 0.0
The reason against BCL decimal:
> decimal 1E-100;;
val it : decimal = 0M
Every listed type has its own drawbacks.
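When none of them is precise enough, arbitrary-precision arithmetic is the usual escape hatch. A sketch with .NET's BigInteger (F#'s bigint alias), which avoids the magnitude problems above at the cost of speed, and of still not covering fractions by itself:

open System.Numerics

let big = BigInteger.Pow(bigint 10, 100)
(big + BigInteger.One) - big  // evaluates to 1, with no loss at 10^100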