If I have the following type:
type Color(r: float, g: float, b: float) =
    member this.r = r
    member this.g = g
    member this.b = b
    static member ( * ) (c1: Color, c2: Color) =
        Color(c1.r * c2.r, c1.g * c2.g, c1.b * c2.b)
    static member Zero = Color(0.0, 0.0, 0.0)
and I do:
let ca = Color(1.,1.,1.)
let cb = Color(1.,1.,1.)
ca = cb
I would expect to obtain true, but F# Interactive (running the script) gives me false.
However, if I instead define:
let ca = Color(1.,1.,1.)
let cb = ca
ca = cb
It returns true.
Am I doing something wrong when comparing two values of a user-defined type this way?
How can I get true as the result?
Thanks
The OP's definition of Color is a class. Classes have reference equality by default, just like in C#: two instances are equal only if they are literally the same object (i.e. they point to the same memory address).
Only the functional data types in F# have structural equality. These include records, discriminated unions, lists, and a few other types.
It'd be more idiomatic to define Color as a record:
type Color = { Red : float; Green : float; Blue : float }
This type has structural equality built in:
> let ca = { Red = 1.; Green = 1.; Blue = 1. };;
val ca : Color = {Red = 1.0;
                  Green = 1.0;
                  Blue = 1.0;}
> let cb = { Red = 1.; Green = 1.; Blue = 1. };;
val cb : Color = {Red = 1.0;
                  Green = 1.0;
                  Blue = 1.0;}
> ca = cb;;
val it : bool = true
If you want to define multiplication and zero for the type, you can do that as well:
let (*) x y = {
    Red = x.Red * y.Red
    Green = x.Green * y.Green
    Blue = x.Blue * y.Blue }
let zero = { Red = 0.0; Green = 0.0; Blue = 0.0 }
This enables you to write, e.g.:
> let product = ca * cb;;
val product : Color = {Red = 1.0;
                       Green = 1.0;
                       Blue = 1.0;}
F# implements automatic memberwise comparison for records and unions, but not for classes. If you want to have it and construct values with the Color(r, g, b) syntax, you can use a single-case union. You will get pattern matching as a bonus (see my implementation of (*)).
type Color =
    | Color of r: float * g: float * b: float
    member this.r = let (Color(r, _, _)) = this in r
    member this.g = let (Color(_, g, _)) = this in g
    member this.b = let (Color(_, _, b)) = this in b
    static member (*) (Color(r1, g1, b1), Color(r2, g2, b2)) =
        Color(r1 * r2, g1 * g2, b1 * b2)
    static member Zero = Color(0., 0., 0.)
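With this definition, the comparison from the question behaves as expected, since single-case unions get structural equality for free:
let ca = Color(1., 1., 1.)
let cb = Color(1., 1., 1.)
ca = cb // evaluates to true: the case fields are compared value by value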
To start, you should read this page:
http://blogs.msdn.com/b/dsyme/archive/2009/11/08/equality-and-comparison-constraints-in-f-1-9-7.aspx
It does a great job of illustrating how equality works in F#.
As to your specific issue, you are looking at the difference between reference equality and structural equality. You can add the following annotation:
[<CustomEquality; CustomComparison>]
and then override the Equals method (override x.Equals(other)) to do your memberwise comparison.
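For example, here is a minimal sketch of what that could look like for the class from the question (a sketch only: the override is added directly to the class, and GetHashCode is overridden as well to keep it consistent with Equals). With this in place, ca = cb from the original snippet evaluates to true:
type Color(r: float, g: float, b: float) =
    member this.r = r
    member this.g = g
    member this.b = b
    static member ( * ) (c1: Color, c2: Color) =
        Color(c1.r * c2.r, c1.g * c2.g, c1.b * c2.b)
    static member Zero = Color(0.0, 0.0, 0.0)
    // Memberwise equality: two Colors are equal when their channels are equal.
    override this.Equals(other) =
        match other with
        | :? Color as c -> this.r = c.r && this.g = c.g && this.b = c.b
        | _ -> false
    // Keep hashing consistent with the Equals override.
    override this.GetHashCode() = hash (r, g, b)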
EDIT: Resolved, I answered the question below.
I am using the following to get metadata for PHAssets:
let data = NSData.init(contentsOf: url!)!
if let imageSource = CGImageSourceCreateWithData(data, nil) {
    let metadata = CGImageSourceCopyPropertiesAtIndex(imageSource, 0, nil)! as NSDictionary
}
The metadata dictionary has all the values I am looking for. However, a few fields like ShutterSpeedValue and ExposureTime, which are fractions, get printed as decimals:
ExposureTime = "0.05"
ShutterSpeedValue = "4.321956769055745"
When I look at this data in the Preview app on my Mac and in exiftool, it shows:
ExposureTime = 1/20
ShutterSpeedValue = 1/20
How can I get the correct fraction string instead of the decimal string?
EDIT: I tried simply converting the decimal to a fraction string using this code from SO, but the result isn't correct:
func rationalApproximation(of x0 : Double, withPrecision eps : Double = 1.0E-6) -> String {
    var x = x0
    var a = x.rounded(.down)
    var (h1, k1, h, k) = (1, 0, Int(a), 1)
    while x - a > eps * Double(k) * Double(k) {
        x = 1.0/(x - a)
        a = x.rounded(.down)
        (h1, k1, h, k) = (h, k, h1 + Int(a) * h, k1 + Int(a) * k)
    }
    return "\(h)/\(k)"
}
As you can see, the decimal value of ShutterSpeedValue, printed as 4.321956769055745, isn't even equal to 1/20.
Resolved.
As per
https://www.dpreview.com/forums/post/54376235
ShutterSpeedValue is defined as an APEX value, where:
ShutterSpeed = -log2(ExposureTime)
So -log2(1/20) is 4.3219, which is exactly what I observed.
So to get the shutter speed as a fraction, I use the following:
"1/\(ceil(pow(2, Double(4.321956769055745))))"
I tested 3 different photos and 1/20, 1/15 and 1/1919 were all correctly calculated using your formula.
for(int i = 1; i < 10; i++) {
  var shape = new Shape();
  shape.graphics.beginPath();
  shape.graphics.moveTo(r1.nextInt(1500), r2.nextInt(1500));
  shape.graphics.lineTo(r3.nextInt(1500), r4.nextInt(1500));
  shape.graphics.strokeColor(Color.Green);
  shape.graphics.closePath();
  stage.addChild(shape);
}
How can I randomly change the color of the line?
In StageXL, colors are actually just integers. So, as @bp74 says, you can do:
var a = 255; // Assuming you want full opacity every time.
var r = random.nextInt(256); // red = 0..255
var g = random.nextInt(256); // green = 0..255
var b = random.nextInt(256); // blue = 0..255
var color = (a << 24) + (r << 16) + (g << 8) + b;
shape.graphics.strokeColor(color);
This is assuming you have random defined somewhere above:
var random = new Random();
Note that you most probably don't need several Random instances (like you have with r1, r2, r3 and r4, I assume).
The color in StageXL (and Flash) is a 32-bit ARGB value: you use 4 × 8 bits for the alpha, red, green and blue color channels. The following example shows the color 0xFFFFFFFF, which is white:
var a = 255; // alpha = 0..255
var r = 255; // red = 0..255
var g = 255; // green = 0..255
var b = 255; // blue = 0..255
var color = (a << 24) + (r << 16) + (g << 8) + b;
Newbie question:
void main () {
    int A = 1;
    int B = 2;
    double C = A / B;
    stdout.printf("C value is: %g\n", C);
}
This prints: "C value is: 0"
void main () {
    int A = 1;
    double B = 2;
    double C = A / B;
    stdout.printf("C value is: %g\n", C);
}
This prints: "C value is: 0.5"
I don't understand the reason why the result is not 0.5 in both cases.
The division operation is performed on two integers, so the result is an integer. The fact that you assign it to a double afterwards doesn't change that.
What you're doing in your question, with the implicit conversions made explicit, is
int A = 1;
int B = 2;
double C = (double) (A / B);
However, if you want to perform the division using doubles, you have to explicitly cast at least one of the operands to double:
int A = 1;
int B = 2;
double C = ((double) A) / B;
For the rules concerning arithmetic operations, see the arithmetic expressions section of the Vala Manual. The relevant bit:
If both operands are of integer types, then the result will be the quotient only of the calculation (equivalent to the precise answer rounded down to an integer value.) If either operand is of a floating point type, then the result will be as precise as possible within the boundaries of the result type (which is worked out from the basic arithmetic type rules.)
I'm trying to write a function in Scilab to display images.
I'm dealing with images as lists of matrices, and then converting the list to a hypermatrix inside the function. However, the code does not seem to work for images of the type uint16 or uint32.
This is the code I've been working on:
imshow()
function [] = imshow(Image)
    global TYPE_DOUBLE; // retrieving list and creating 3-dimensional matrix out of it
    dimensions = size(Image)
    MaxUInt8 = 2 ^ 8 - 1;
    MaxGrayValue = MaxUInt8; // changed from MaximumGrayValue
    if dimensions == 3 then
        matSize = size(Image(1));
        r = matrix(Image(1), matSize(1), matSize(2));
        g = matrix(Image(2), matSize(1), matSize(2));
        b = matrix(Image(3), matSize(1), matSize(2));
        z(:,:,1) = uint8(r); // Since Matplot is not working with uint16 and uint32, convert every
        z(:,:,2) = uint8(g); // image to an 8-bit palette.
        z(:,:,3) = uint8(b); // Note: this will affect the color depth.
        [NumberOfRows NumberOfColumns NumberOfChannels] = size(z);
        NumberOfPixels = NumberOfRows * NumberOfColumns;
        Sample = z(1);
        //printf("\nType of Sample: ");
        //printf(typeof(Sample)); //DEBUG:
        //printf("\n");
        if type(Sample) == 1 then // type 1 = real/complex matrix of double
            ColorMap = matrix(z, NumberOfPixels, NumberOfChannels);
            disp(ColorMap);
        else
            TypeName = typeof(Sample)
            select TypeName
            case 'uint8'
                MaxGrayValue = 2 ^ 8 - 1;
                //printf("uint8\t%d", MaxGrayValue); //DEBUG:
            case 'uint16'
                MaxGrayValue = 2 ^ 16 - 1;
                //ColorMap = double(matrix(z, NumberOfPixels, NumberOfChannels)) / MaxGrayValue;
                //printf("uint16\t%d", MaxGrayValue); //DEBUG:
            case 'uint32'
                MaxGrayValue = 2 ^ 32 - 1;
                //ColorMap = double(matrix(z, NumberOfPixels, NumberOfChannels)) / MaxGrayValue;
                //printf("uint32\t%d", MaxGrayValue); //DEBUG:
            end;
            ColorMap = double(matrix(z, NumberOfPixels, NumberOfChannels)) / MaxGrayValue;
            printf("\nCreated colormap with MaxGrayValue = %d\n", MaxGrayValue); //DEBUG:
        end;
        Img = z;
    // Grayscale
    elseif dimensions == 1 then
        matSize = size(Image(1));
        Img = matrix(Image(1), matSize(1), matSize(2));
        Img = Img';
        select typeof(Img)
        case 'uint8'
            MaxGrayValue = MaxUInt8;
        case 'uint16'
            MaxGrayValue = max(Image(:));
        case 'uint32'
            MaxGrayValue = max(Image(:));
        end;
        ColorMap = graycolormap(double(MaxGrayValue + 1)); // changed from MaximumGrayValue
    end;
    show(Img, ColorMap);
endfunction
show()
function [] = show(Img, ColorMap)
    FigureHandle = gcf();
    drawlater();
    FigureHandle.color_map = ColorMap
    FigureHandle.background = -2; // sets the background to white
    FigureHandle.figure_name = "Title";
    [NumberOfRows NumberOfColumns] = size(Img);
    FigureHandle.axes_size = [NumberOfColumns NumberOfRows];
    delete(gca()); // previous image is deleted
    Diagram = gca();
    [NumberOfRows NumberOfColumns] = size(Img);
    Diagram.data_bounds = [1, 1; NumberOfColumns, NumberOfRows];
    Diagram.axes_visible = ['off' 'off' 'off'];
    Diagram.isoview = 'on';
    Options = '082'; // Box is drawn around image.
    //printf("\nGiven to Matplot: "); //DEBUG:
    //printf(typeof(Img)); //DEBUG:
    Matplot(Img, Options);
    drawnow();
endfunction
The error I'm getting is:
!--error 202
Matplot: Wrong type for input argument #1: A real or integer expected.
at line 22 of function show called by :
at line 67 of function imshow called by :
imshow(a);
Any help would be great.
It seems that OpenCV and matplotlib cannot imshow uint16 or uint32 images either, so the images will be converted to uint8 when shown.
I am looking to convert the following C code into F# (this is the fast inverse square root algorithm):
float Q_rsqrt( float number )
{
    long i;
    float x2, y;
    x2 = number * 0.5F;
    y = number;
    i = * ( long * ) &y;                 // Extract bit pattern
    i = 0x5f3759df - ( i >> 1 );
    y = * ( float * ) &i;                // Convert back to float.
    y = y * ( 1.5F - ( x2 * y * y ) );
    return y;
}
First of all, you should do some research. Then, if you get stuck, specify what exactly you are having trouble with.
Here is a solution by Kit Eason:
let fastInvSqrt (n : float32) : float32 =
    let MAGIC_NUMBER : int32 = 0x5f3759df
    let THREE_HALVES = 1.5f
    let x2 = n * 0.5f
    let i = MAGIC_NUMBER - (System.BitConverter.ToInt32(System.BitConverter.GetBytes(n), 0) >>> 1)
    let y = System.BitConverter.ToSingle(System.BitConverter.GetBytes(i), 0)
    y * (THREE_HALVES - (x2 * y * y))
// Examples:
let x = fastInvSqrt 4.0f
// Output: val x : float32 = 0.499153584f
let x' = 1. / sqrt(4.0)
// Output: val x' : float = 0.5
When it comes to performance and low-level optimization, it is often a good idea to measure before and after. The fast inverse square root trick is very cool, but it only approximates the inverse square root, and the question is whether tricky code like this is truly necessary these days (in the DOOM days, when float performance was poor, the trick was amazing).
Anyway, I built a simple performance test bench in order to compare the trivial implementation with the solution provided by Kit Eason/lad2025, and another one that doesn't allocate byte arrays.
open System
open System.Diagnostics
open System.Runtime.InteropServices

[<Literal>]
let MAGIC_NUMBER : int32 = 0x5f3759df
[<Literal>]
let THREE_HALVES = 1.5F
[<Literal>]
let HALF = 0.5F
[<Literal>]
let OUTER = 1000
[<Literal>]
let INNER = 10000

let inline invSqr (x : float32) : float32 = 1.F / sqrt x

let fInvSqr (x : float32) : float32 =
    let x2 = x * 0.5f
    // Allocates two byte arrays, creating GC pressure ==> hurts performance
    let i = MAGIC_NUMBER - (BitConverter.ToInt32(BitConverter.GetBytes(x), 0) >>> 1)
    let y = BitConverter.ToSingle(BitConverter.GetBytes(i), 0)
    y * (THREE_HALVES - (x2 * y * y))

// Susceptible to race conditions & endianness issues
[<StructLayout (LayoutKind.Explicit)>]
type Bits =
    struct
        [<FieldOffset(0)>]
        val mutable f: float32
        [<FieldOffset(0)>]
        val mutable i: int32
    end

let mutable bits = Bits ()

let fInvSqr2 (x : float32) : float32 =
    let x2 = x * 0.5F
    bits.f <- x
    let i = MAGIC_NUMBER - (bits.i >>> 1)
    bits.i <- i
    let y = bits.f
    y * (THREE_HALVES - (x2 * y * y))

let timeIt n (a : unit -> 'T) : int64 * 'T =
    let r = a ()
    let sw = Stopwatch ()
    sw.Start ()
    for i = 1 to n do
        ignore <| a ()
    sw.Stop ()
    sw.ElapsedMilliseconds, r

[<EntryPoint>]
let main argv =
    let testCases =
        [|
            "invSqr"  , fun () ->
                let mutable sum = 0.F
                for x = 1 to INNER do
                    sum <- sum + invSqr (float32 x)
                sum
            "fInvSqr" , fun () ->
                let mutable sum = 0.F
                for x = 1 to INNER do
                    sum <- sum + fInvSqr (float32 x)
                sum
            "fInvSqr2", fun () ->
                let mutable sum = 0.F
                for x = 1 to INNER do
                    sum <- sum + fInvSqr2 (float32 x)
                sum
        |]
    for name, action in testCases do
        printfn "Running %s %d times..." name (OUTER*INNER)
        let elapsed, result = timeIt OUTER action
        printfn "... it took %d ms product result: %f" elapsed result
    0
The performance test result on my machine:
Running invSqr 10000000 times...
... it took 78 ms product result: 198.544600
Running fInvSqr 10000000 times...
... it took 311 ms product result: 198.358200
Running fInvSqr2 10000000 times...
... it took 49 ms product result: 198.358200
Press any key to continue . . .
So we see that fInvSqr is actually about four times slower than the trivial solution (311 ms vs. 78 ms), most likely because of the byte-array allocations. In addition, the cost of GC is hidden in these numbers and might add non-deterministic performance degradation.
fInvSqr2 does perform better than the trivial version (49 ms vs. 78 ms), but there are drawbacks here as well:
The result is off by about 0.1%.
The Bits trick is susceptible to race conditions (fixable; see the sketch after this list).
The Bits trick is susceptible to endianness issues (if you run the program on a CPU with a different endianness, it might break).
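As a side note on the race condition, a minimal sketch of one way to avoid it (fInvSqr3 is my name for it, not part of the benchmark above) is to give each call its own local Bits value instead of sharing the module-level one:
let fInvSqr3 (x : float32) : float32 =
    let x2 = x * 0.5F
    // A local copy of the struct lives on the stack, so concurrent callers no longer
    // share mutable state; the endianness caveat still applies.
    let mutable bits = Bits ()
    bits.f <- x
    bits.i <- MAGIC_NUMBER - (bits.i >>> 1)
    let y = bits.f
    y * (THREE_HALVES - (x2 * y * y))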
Are the performance gains worth the drawbacks? Since a program is probably not built solely out of inverse square root operations, the effective performance gain might be much smaller in reality. I have a hard time imagining a scenario where I would be so pressed for performance that I would opt for the fast inverse trick today, but it all depends on your context.