FsCheck: change the range of values used for testing - F#

My code is automatically tested with values from -99 to 99 when using FsCheck's
Check.Quick test
where my test function takes integer values.
I would like to test using values from 1 to 4999.

You can use Gen.elements combined with Prop.forAll:
let n = Gen.elements [-99..99] |> Arb.fromGen
let prop = Prop.forAll n (fun number ->
    // Test goes here - e.g.:
    Assert.InRange(number, -99, 99))
prop.QuickCheck()
Gen.elements takes a sequence of valid values and creates a uniform generator from that sequence. Prop.forAll defines a property with that custom generator.
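Adapted to the range asked about in the question (1 to 4999), the same pattern might look like the sketch below; Gen.choose is used instead of Gen.elements so the 4999 candidate values don't have to be materialised as a list, and oneToFourThousandNine is just an illustrative name:
let oneToFourThousandNine = Gen.choose (1, 4999) |> Arb.fromGen
let prop = Prop.forAll oneToFourThousandNine (fun number ->
    // Test goes here - e.g.:
    Assert.InRange(number, 1, 4999))
prop.QuickCheck()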
You can combine it with FsCheck's Glue Library for xUnit.net, which is my preferred method:
[<Property>]
let ``Number is between -99 and 99`` () =
    let n = Gen.elements [-99..99] |> Arb.fromGen
    Prop.forAll n (fun number ->
        // Test goes here - e.g.:
        Assert.InRange(number, -99, 99))

By default, FsCheck generates integers between 1 and 100. You can change this by supplying a Config object to the check:
let config = {
    Config.Quick with
        EndSize = 4999
}
Check.One(config, test)
EndSize indicates the size to use for the last test, when all the tests pass. The size increases linearly from StartSize (which you can also set, should you wish to generate test data in a range starting from some value other than 1) to EndSize. See the implementation of the Config type in https://github.com/fscheck/FsCheck/blob/master/src/FsCheck/Runner.fs
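Combining both fields for the range in the question might look something like the sketch below; note that the size bounds the magnitude of the generated integers rather than restricting them to positive values (hence the -99 to 99 values observed in the question), and test here is just a placeholder property:
let config = {
    Config.Quick with
        StartSize = 1
        EndSize = 4999
}
let test (n: int) = n + 0 = n   // placeholder property taking an int
Check.One(config, test)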

Related

Create Multi-Parameter Pipeable Function F#

I want to generalize my standard deviation function to allow for calculations of multiples of deviations, but still use it in the context of piping. It appears that I am setting up my function incorrectly.
let variance (x:seq<float>) =
    let mean = x |> Seq.average
    x
    |> Seq.map (fun x -> (x - mean) ** 2.0)
    |> Seq.average

let stdDeviation (deviations:float, x:seq<float>) =
    sqrt (x |> variance) * deviations
Example usage would be
let sTester = seq{1.0 .. 20.0}
let stdDev = sTester |> stdDeviation 1.0
I keep getting the error: The expression was expecting to have the type seq<float> -> 'a but here has type float.
Help is greatly appreciated.
Thanks,
~David
If you change your stdDeviation so that it takes two parameters, rather than a tuple then it works:
let stdDeviation (deviations:float) (x:seq<float>) =
    sqrt (x |> variance) * deviations

let stdDev = sTester |> stdDeviation 1.0
The idea is that when you write let stdDeviation (deviations, x:seq<float>) then you are defining a function that takes a single parameter that is actually a tuple.
The way the |> operator works is that it supplies one parameter to the function on the right. So if you have just one parameter (which is a tuple), then the pipe isn't all that useful.
But if you say let stdDeviation deviations (x:seq<float>) then you are defining a function with two parameters. When you write input |> stdDeviation 1.0 you are then providing the first parameter (1.0) on the right-hand side and the input (the second parameter) on the left via the pipe.
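A quick illustration of that difference (twoSigma is just an illustrative name): partially applying the curried version gives back a function that is still waiting for the sequence, which is exactly what the pipe then supplies.
// stdDeviation 2.0 has type seq<float> -> float once the function is curried
let twoSigma = stdDeviation 2.0
let result = seq { 1.0 .. 20.0 } |> twoSigma   // same as stdDeviation 2.0 (seq { 1.0 .. 20.0 })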

How extract the int from a FsCheck.Gen.choose

I'm new to F#, and can't see how to extract the int value from:
let autoInc = FsCheck.Gen.choose(1,999)
The compiler says the type is Gen<int>, but I can't get the int out of it! I need to convert it to decimal, and the two types are not compatible.
From a consumer's point of view, you can use the Gen.sample combinator which, given a generator (e.g. Gen.choose), gives you back some example values.
The signature of Gen.sample is:
val sample : size:int -> n:int -> gn:Gen<'a> -> 'a list
(* `size` is the size of generated test data
`n` is the number of samples to be returned
`gn` is the generator (e.g. `Gen.choose` in this case) *)
You can ignore size because Gen.choose ignores it, as its distribution is uniform, and do something like:
let result = Gen.choose(1,999) |> Gen.sample 0 1 |> Seq.exactlyOne |> decimal
(* 0 is the `size` (gets ignored by Gen.choose)
1 is the number of samples to be returned *)
The result should be a value in the closed interval [1, 999], e.g. 897.
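If you want several example values rather than exactly one, the second argument to Gen.sample is simply the number of samples to return, e.g.:
let tenSamples = Gen.choose (1, 999) |> Gen.sample 0 10
// a list of 10 integers, each in the closed interval [1, 999]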
Hi, to add to what Nikos already told you, this is how you can get a decimal between 1 and 999:
#r "FsCheck.dll"
open FsCheck
let decimalBetween1and999 : Gen<decimal> =
    Arb.generate |> Gen.suchThat (fun d -> d >= 1.0m && d <= 999.0m)

let sample () =
    decimalBetween1and999
    |> Gen.sample 0 1
    |> List.head
You can now just use sample () to get a random decimal back.
In case you just want integers between 1 and 999, converted to decimal, you can do:
let decimalIntBetween1and999 : Gen<decimal> =
    Gen.choose (1, 999)
    |> Gen.map decimal

let sampleInt () =
    decimalIntBetween1and999
    |> Gen.sample 0 1
    |> List.head
What you probably really want to do instead
is use this to write yourself some nice types and check properties like this (here using xUnit as a test framework and the FsCheck.Xunit package):
open FsCheck
open FsCheck.Xunit

type DecTo999 = DecTo999 of decimal

type Generators =
    static member DecTo999 =
        { new Arbitrary<DecTo999>() with
            override __.Generator =
                Arb.generate
                |> Gen.suchThat (fun d -> d >= 1.0m && d <= 999.0m)
                |> Gen.map DecTo999
        }

[<Arbitrary(typeof<Generators>)>]
module Tests =

    type Marker = class end

    [<Property>]
    let ``example property`` (DecTo999 d) =
        d > 1.0m
Gen<'a> is a type that essentially abstracts a function int -> 'a (the actual type is a bit more complex, but let's ignore that for now). This function is pure, i.e. when given the same int, you'll get the same instance of 'a back every time. The idea is that FsCheck generates a bunch of random ints, feeds them to the Gen function, and out come random instances of the type 'a you're interested in, which are then fed to a test.
So you can't really get the int out. What you have in your hands is a function that, given an int, generates another int.
Gen.sample as described in another answer essentially just feeds a sequence of random ints to the function and applies it to each, returning the results.
The fact that this function is pure is important because it guarantees reproducibility: if FsCheck finds a value for which a test fails, you can record the original int that was fed into the Gen function - rerunning the test with that seed is guaranteed to generate the same values, i.e. reproduce the bug.

Does F# iterate through tuples from back to front?

I've created a little tuple of languages and when using it in the interactive window they are listed in reverse. Is this normal F# behavior?
let languages = ("English", "Spanish", "Italian")
let x, y, z = languages
val languages : string * string * string = ("English", "Spanish", "Italian")
val z : string = "Italian"
val y : string = "Spanish"
val x : string = "English"
You're creating three variables, at the same time, with independent values. Order is not relevant here. F# Interactive could print the values in any order.
What is important is the evaluation order in your code, and the spec says it's from left to right when you're calling a function or constructor, creating a record, and so on.
> (printfn "a", printfn "b");;
a
b
That is also how FSI prints tuples when I decompose them on my machine, e.g.:
let x, y = ("a", "b");;
val y : string = "b"
val x : string = "a"
It's a little weird that it prints in "reverse", but I'm not sure that I would call it F# behavior as much as it is FSI behavior or pretty print behavior.
If you want all the details, you can always have a look at the source code:
http://fsharppowerpack.codeplex.com/
I'm not sure if there is a connection, but wrapping F# expressions in Quotations can give you insight into the semantics of the language, and you can see in the following that the named values are indeed bound in reverse order, as shown in FSI:
> <# let languages = ("English", "Spanish", "Italian") in let x, y, z = languages in () #> |> string;;
val it : string =
"Let (languages,
NewTuple (Value ("English"), Value ("Spanish"), Value ("Italian")),
Let (z, TupleGet (languages, 2),
Let (y, TupleGet (languages, 1),
Let (x, TupleGet (languages, 0), Value (<null>)))))"
Note that this is still consistent with @Laurent's answer, which asserts that argument expressions in tuple construction are evaluated from left to right. In the following example, see how the result of the tuple construction is bound to an intermediate named value, which is then deconstructed using side-effect-free TupleGet expressions.
> <# let x,y = (stdin.Read(), stdin.ReadLine()) in () #> |> string;;
val it : string =
"Let (patternInput,
NewTuple (Call (Some (Call (None, System.IO.TextReader ConsoleIn[Object](),
[])), Int32 Read(), []),
Call (Some (Call (None, System.IO.TextReader ConsoleIn[Object](),
[])), System.String ReadLine(), [])),
Let (y, TupleGet (patternInput, 1),
Let (x, TupleGet (patternInput, 0), Value (<null>))))"

Type mismatch with Async in F#

I've just started messing around with F# and am trying to do some basic parallel computation to get familiar with the language. I'm having trouble with type mismatches. Here's an example:
let allVariances list =
    seq {
        for combination in allCombinations list do
            yield (combination, abs (targetSum - List.sum combination))
    }

let compareVariance tup1 tup2 =
    if snd tup1 < snd tup2 then
        tup1
    else
        tup2

let aCompareVariance tup1 tup2 =
    async { return compareVariance tup1 tup2 }

let matchSum elements targetSum =
    allVariances elements
    |> Seq.reduce aCompareVariance
    |> Async.Parallel
    |> Async.RunSynchronously
So, "allVariances elements" produces a seq<float list * float>. CompareVariance takes two of those <float list * float> tuples and returns the one with the smaller second item (variance). My goal is to use Reduce to end up with the tuple with the smallest variance. However, I get a type mismatch on the aCompareVariance argument:
Error 1 Type mismatch. Expecting a float list * float -> float list * float -> float list * float but given a float list * float -> float list * float -> Async<float list * float> The type 'float list * float' does not match the type 'Async<float list * float>'
It seems like the Async return type isn't accepted by Reduce?
Seq.reduce takes a function and a sequence and reduces the sequence using the function. That is, the outcome of reducing a sequence {a1; a2; a3; ...; an} with function f will be f (... (f (f a1 a2) a3) ...) an. However, the function you're passing can't be applied this way, because the return type (Async<float list * float>) doesn't match the argument type (float list * float). What exactly are you trying to achieve?
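If it is simply the tuple with the smallest variance, a plain synchronous reduce with your original comparison function already gives you that; a minimal sketch, keeping your other definitions as they are:
let matchSum elements targetSum =
    allVariances elements
    |> Seq.reduce compareVariance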
Also, keep in mind that async computations are great for asynchronous work, but not ideal for parallel work. See F#: Asynch and Tasks and PLINQ, oh my! and Task Parallel Library vs Async Workflows.
EDIT
Here's one way to write a function which will reduce items more like you expected, operating sort of like this:
[|a1; a2; a3; a4|]
[|f a1 a2; f a3 a4|]
f (f a1 a2) (f a3 a4)
At each stage, all applications of f will take place in parallel. This uses Async.Parallel, which as mentioned above is probably less appropriate than using the Task Parallel Library (and may even be slower than just doing a normal synchronous Array.reduce). As such, just consider this to be demonstration code showing how to piece together the async functions.
let rec reduce f (arr:_[]) =
    match arr.Length with
    | 0 -> failwith "Can't reduce an empty array"
    | 1 -> arr.[0]
    | n ->
        // Create an array with n/2 tasks, each of which combines neighboring entries using f
        Array.init ((n+1)/2)
            (fun i ->
                async {
                    // if n is odd, leave the last item alone
                    if n = 2*i + 1 then
                        return arr.[2*i]
                    else
                        return f arr.[2*i] arr.[2*i+1] })
        |> Async.Parallel
        |> Async.RunSynchronously
        |> reduce f
Note that converting items to and from Async values all happens internally to this function, so it has the same type as Array.reduce and would be used with your normal compareVariance function rather than with aCompareVariance.
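Hypothetical usage with the definitions from the question (note the conversion to an array first, since reduce above works on arrays) would then look something like:
let matchSum elements targetSum =
    allVariances elements
    |> Seq.toArray
    |> reduce compareVariance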

F# Units of measure, problems with genericity

(I'm still banging on with units of measure in F#)
I'm having a problem making 'generic' functions which take 'typed' floats.
The following mockup class is intended to keep tabs on a cumulative error in position, based on a factor 'c'. The compiler doesn't like me saying 0.<'a> in the body of the type ("Unexpected type parameter in unit-of-measure literal").
///Corrects cumulative error in position based on s and c
type Corrector(s_init:float<'a>) =
    let deltaS ds c = sin (ds / c) //incremental error function

    //mutable values
    let mutable nominal_s = s_init
    let mutable error_s = 0.<'a> //<-- COMPILER NO LIKE

    ///Set new start pos and reset error to zero
    member sc.Reset(s) =
        nominal_s <- s
        error_s <- 0.<'a> //<-- COMPILER NO LIKE

    ///Pass in new pos and c to corrector, returns corrected s and current error
    member sc.Next(s:float<'a>, c:float<'a>) =
        let ds = s - nominal_s //distance since last request
        nominal_s <- s //update nominal s
        error_s <- error_s + (deltaS ds c) //calculate cumulative error
        (nominal_s + error_s, error_s) //pass back tuple
Another related question, I believe, still to do with 'generic' functions.
In the following code, what I am trying to do is make a function which will take a #seq of any type of floats and apply it to a function which only accepts 'vanilla' floats. The third line gives a 'Value Restriction' error, and I can't see any way out. (Removing the # solves the problem, but I'd like to avoid having to write the same thing for lists, seqs, arrays etc.)
[<Measure>] type km //define a unit of measure
let someFloatFn x = x + 1.2 //this is a function which takes 'vanilla' floats
let MapSeqToNonUnitFunction (x:#seq<float<'a>>) = Seq.map (float >> someFloatFn) x
let testList = [ 1 .. 4 ] |> List.map float |> List.map ((*) 1.0<km>)
MapSeqToNonUnitFunction testList
You can change the first 'compiler no like' to
let mutable error_s : float<'a> = 0.0<_>
and the compiler seems to like that.
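For what it's worth, you can also declare the measure parameter explicitly on the type, and LanguagePrimitives.FloatWithMeasure gives you another way to write a 'typed' zero. A sketch of just the relevant parts (the remaining members stay as in the question):
type Corrector<[<Measure>] 'u>(s_init: float<'u>) =
    let mutable nominal_s = s_init
    let mutable error_s : float<'u> = 0.0<_>

    ///Set new start pos and reset error to zero
    member sc.Reset(s: float<'u>) =
        nominal_s <- s
        // FloatWithMeasure re-tags a plain float with whatever measure is expected here ('u)
        error_s <- LanguagePrimitives.FloatWithMeasure 0.0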
As for the second question, I am not seeing the same error as you, and this
[<Measure>] type km //define a unit of measure
let someFloatFn x = x + 1.2 //this is a function which takes 'vanilla' floats
let MapSeqToNonUnitFunction (x:seq<float<_>>) = Seq.map (float >> someFloatFn) x
let testList = [ 1 .. 4 ] |> List.map float |> List.map ((*) 1.0<km>)
let testList2 = testList :> seq<_>
let result = MapSeqToNonUnitFunction testList2
printfn "%A" result
compiles for me (though the upcast to seq<_> is a little annoying, I am not sure if there is an easy way to get rid of it or not).
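Another way to sidestep that upcast entirely is to strip the unit at the call site, since dividing a float<km> by 1.0<km> yields a plain float; a small sketch (resultWithoutUnits is just an illustrative name):
let resultWithoutUnits =
    testList
    |> List.map (fun x -> x / 1.0<km>)   // float<km> / float<km> = float
    |> List.map someFloatFn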
As an aside, I think the convention is to name unit parameters 'u, 'v, ... rather than 'a, 'b, ...
Units of measure cannot be used as type parameters. This is because they are erased by the compiler during compilation. This question is quite similar:
F# Units of measure - 'lifting' values to float<something>
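A quick way to see that erasure for yourself (a small sketch reusing the km measure from above): at runtime a float<km> is just a System.Double.
let d = 5.0<km>
printfn "%b" (box d :? float)          // true - the measure is gone at runtime
printfn "%s" ((box d).GetType().Name)  // prints "Double"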
