Given this Dafny code:
datatype Result = Foo | Bar

method test() returns (r: Result)
{
}
Dafny verifies this OK and test() returns Foo. That is technically correct (it does return a value of the correct type), but I was expecting Dafny to complain that the result is never set by the method itself. What test() is doing is similar to writing:
return;
in a C function that is supposed to return an int.
Is there a way to make Dafny verify that a method's results are always set before the method returns?
The flag you want is /definiteAssignment:2:
/definiteAssignment:<n>
0 - ignores definite-assignment rules; this mode is for testing only--it is
not sound
1 (default) - enforces definite-assignment rules for compiled variables and fields
whose types do not support auto-initialization and for ghost variables
and fields whose type is possibly empty
2 - enforces definite-assignment for all non-yield-parameter
variables and fields, regardless of their types
3 - like 2, but also performs checks in the compiler that no nondeterministic
statements are used; thus, a program that passes at this level 3 is one
that the language guarantees that values seen during execution will be
the same in every run of the program
This is what Dafny says about your code with that flag:
test.dfy(5,0): Error: out-parameter 'r', which is subject to definite-assignment rules, might be uninitialized at this return point
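With that flag, the error goes away once the out-parameter is explicitly assigned; a minimal sketch of the fixed method (same names as above):
datatype Result = Foo | Bar

method test() returns (r: Result)
{
  r := Foo;  // explicit assignment satisfies /definiteAssignment:2
}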
I have a small helper proc that is supposed to tell me at compile-time whether a type is an object-type or not.
func isObject*[T](val: typedesc[T]): bool {.compileTime.} = T is (object or ref object)
However, when I call this proc with a simple echo to see whether it works, I receive an error:
type A = object
echo isObject(A)
Error: request to generate code for .compileTime proc: isObject
Why is that? It should be perfectly valid to just call this; isObject should simply compile to true, so that in the end what's written there is echo true. Why does this cause this cryptic error?
The problem here is that runtime code (the echo call) is trying to use a compile-time proc.
That is not valid: the compiler would not replace the function call with its result, but would try to actually call the function at runtime instead. The compiler knows this is invalid behaviour and thus prohibits it with an error, albeit one that isn't very useful.
The only way this can be allowed is if you store the result of the compile-time proc in a compile-time variable, aka a const. These are allowed to be used at runtime.
So the calling code would look more like this instead:
type A = object
const x = isObject(A)
echo x
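Since x is a const, it can also feed other compile-time constructs; a small self-contained sketch (reusing the isObject proc from above, the when branch is just for illustration):
proc isObject*[T](val: typedesc[T]): bool {.compileTime.} =
  T is (object or ref object)

type A = object

const x = isObject(A)   # evaluated entirely at compile time

when x:                 # `when` branches are also resolved at compile time
  echo "A is an object type"
else:
  echo "A is not an object type"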
EDIT:
As Elegantbeef pointed out on Nim's Discord:
Another alternative is to do what I initially expected to happen and have the isObject(A) call evaluate fully at compile time, so that at runtime it goes away and all that's left is its result, true.
To do so, just use static:
type A = object
echo static(isObject(A))
Can someone simply explain to me how the null assertion operator (!) works and when to use it?
The ! operator can be used after any expression, as in e!.
That evaluates the expression e to a value v, then checks whether v is the null value. If it is null, an error is thrown. If not, e! also evaluates to v.
The static type of an expression e! is (basically) the static type of e with any trailing ?s removed. So, if e has type int?, the type of e! is int.
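For example, a minimal sketch (with made-up values) of how the static type changes:
void main() {
  int? maybe = int.tryParse('42'); // static type int?
  int definitely = maybe!;         // static type int; throws if maybe is null
  print(definitely);
}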
You should not use e! unless e can be null (the type of e is potentially nullable).
The ! operator is dynamically checked. It can throw at runtime, and there is no static check which can guarantee that it won't. It's like using a value of type dynamic in that all the responsibility for preventing it from throwing is on the author; the compiler can't help you, and you need good tests to ensure that it doesn't throw when it's not supposed to.
It's called an assertion because it should never throw in production code.
So, use e! when you know (for some reason not obvious to the compiler, perhaps because of some invariant guaranteeing that the value is not null while something else is true) that e is not null.
Example:
abstract class Box<T extends Object> {
  bool get hasValue;
  T? get value;
}
...
Box<int> box = ...;
if (box.hasValue) {
  var value = box.value!;
  ... use value ...
}
If you are repeatedly using ! on the same expression, do consider whether it's more efficient to read it into a local variable just once.
Also, if (like this Box example) the value being null is equivalent to the other test you just did, maybe just check that directly:
Box<int> box = ...;
var value = box.value;
if (value != null) {
  ... use value ...
}
This code, with an explicit != null check on a local variable, is statically guaranteed never to throw because of a null value.
The code using ! above relies on the author to maintain whichever invariant allowed them to write the !, and if something changes, the code might just start throwing at runtime. You can't tell whether it's safe just by looking at the code locally.
Use ! sparingly, just like the dynamic type and late declarations, because they're ways to side-step the compiler's static checking and assure it that "this is going to be fine". That's a great feature when you need it, but it's a risk if you use it unnecessarily.
I'm using the GRDB library to integrate SQLite with my iOS application project. I declared a DatabaseQueue object in AppDelegate.swift like so:
var DB : DatabaseQueue!
In the same file, I provided a function for connecting the above object to a SQLite database, which is called when the app starts running. I have been able to use this in one of my controllers without problems (as in, the app runs fine using the database I connected to it), like so:
var building : Building?
do {
    try DB.write { db in
        let building = Building.fetchOne(db, "SELECT * FROM Building WHERE number = ?", arguments: [bldgNumber])
    }
} catch {
    print(error)
}
However, in another controller, the same construct is met with an error,
Value of optional type 'DatabaseQueue?' must be unwrapped to refer to member 'write' of wrapped base type 'DatabaseQueue'
with the only difference (aside from the code, of course) being that there are return statements inside the do-catch block, since the latter is inside a function (tableView(_:numberOfRowsInSection:)) that is supposed to return an integer. The erroneous section of code is shown below.
var locsCountInFloor : Int
do {
    try DB.write { db in
        if currentBuilding!.hasLGF == true {
            locsCountInFloor = IndoorLocation.filter(bldg == currentBuilding! && level == floor).fetchCount(db)
        } else {
            locsCountInFloor = IndoorLocation.filter(bldg == currentBuilding! && level == floor + 1).fetchCount(db)
        }
        return locsCountInFloor
    }
} catch {
    return 0
}
Any help would be greatly appreciated!
As is often the case when you have a problem with a generic type in Swift, the error message is not helpful.
Here’s the real problem:
DB.write is generic in its argument and return type. It has a type parameter T. The closure argument’s return type is T, and the write method itself returns T.
The closure you’re passing is more than a single expression. It is a multi-statement closure. Swift does not deduce the type of a multi-statement closure from the statements in the closure. This is just a limitation of the compiler, for practical reasons.
Your program doesn’t specify the type T explicitly or otherwise provide constraints that would let Swift deduce the concrete type.
These characteristics of your program mean Swift doesn't know what concrete type to use for T, so the compiler's type checker/deducer fails. You would expect to get an error message about this problem (possibly an inscrutable message, but presumably at least a relevant one).
But that’s not what you get, because you declared DB as DatabaseQueue!.
Since DB is an implicitly-unwrapped optional, the type checker handles it specially by (as you might guess) automatically unwrapping it if doing so makes the statement type-check when the statement would otherwise not type-check. In all other ways, the type of DB is just plain DatabaseQueue?, a regular Optional.
In this case, the statement won’t type-check even with automatic unwrapping, because of the error I described above: Swift can’t deduce the concrete type to substitute for T. Since the statement doesn’t type-check either way, Swift doesn’t insert the unwrapping for you. Then it carries on as if DB were declared DatabaseQueue?.
Since DatabaseQueue? doesn’t have a write method (because Optional doesn’t have a write method), the call DB.write is erroneous. So Swift wants to print an error message. But it “helpfully” sees that the wrapped type, DatabaseQueue, does have a write method. And by this point it has completely forgotten that DB was declared implicitly-unwrapped. So it tells you to unwrap DB to get to the write method, even though it would have done that automatically if it hadn’t encountered another error in this statement.
So anyway, you need to tell Swift what type to use for T. I suspect you meant to say this:
var locsCountInFloor: Int
do {
    locsCountInFloor = try DB.write { db in
        ...
Assigning the result of the DB.write call to the outer locsCountInFloor is sufficient to fix the error, because you already explicitly defined the type of locsCountInFloor. From that, Swift can deduce the return type of this call to DB.write, and from that the type of the closure.
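To see why the assignment fixes the inference, here is a minimal sketch with a stand-in for DatabaseQueue.write (FakeQueue and its write method are made up for illustration; no GRDB involved):
// A stand-in shaped like DatabaseQueue.write: generic in the closure's return type.
struct FakeQueue {
    func write<T>(_ body: (Int) throws -> T) throws -> T {
        return try body(42) // pretend 42 is a database connection
    }
}

let queue: FakeQueue! = FakeQueue() // implicitly unwrapped, like `var DB: DatabaseQueue!`

func locationCount() -> Int {
    var count: Int
    do {
        count = try queue.write { db in
            // Multi-statement closure: Swift can't infer T from the body alone,
            // but assigning the result to `count` (an Int) pins T to Int.
            let doubled = db * 2
            return doubled
        }
    } catch {
        count = 0
    }
    return count
}

print(locationCount()) // prints 84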
Good day,
I have a problem. I want to provoke some type errors in Hack (hacklang).
<?hh
namespace Exsys\HHVM;

class HHVMFacade {
  private $vector = Vector {1,2,3};

  public function echoProduct(): Vector<string> {
    return $this->vector;
  }

  public function test(Vector<string> $vector): void {
    var_dump($vector);
  }
}
The function echoProduct() is declared to return a Vector of strings, but the private property $vector is a Vector of integers. When I call echoProduct() and use the returned value as the argument to test(), I get
object(HH\Vector)#35357 (3) { [0]=> int(1) [1]=> int(2) [2]=> int(3) }
Why? I was expecting some error because the types don't match.
There are two things at play here:
Generics aren't reified, so the runtime has no information about them. This means the runtime is only checking that you're returning a Vector.
$this->vector itself isn't typed. This means the type checker (hh_client) treats it as an unknown type. Unknown types match against everything, so there's no problem returning an unknown type where a Vector<string> is expected.
This is to allow you to gradually type your code. Whenever a type isn't known, the type checker just assumes that the developer knows what's happening.
The first thing I'd do is change the file from partial mode to strict mode, which simply involves changing from <?hh to <?hh // strict. This causes the type checker to complain about any missing type information (as well as a couple of other things, like no superglobals and you can't call non-Hack code).
This produces the error:
test.hh:6:13,19: Please add a type hint (Naming[2001])
If you then type $vector as Vector<int> (private Vector<int> $vector), hh_client then produces:
test.hh:9:16,28: Invalid return type (Typing[4110])
test.hh:8:44,49: This is a string
test.hh:6:20,22: It is incompatible with an int
test.hh:8:44,49: Considering that this type argument is invariant with respect to Vector
Which is the error you expected. You can also get this error simply by adding the type to $vector, without switching to strict mode, though I prefer to write my Hack in the strongest mode that the code supports.
With more recent versions of HHVM, the type checker is called whenever Hack code is run (there's an INI flag to turn this off), so causing the type mismatch will also cause execution of the code to fail.
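For reference, here is a sketch of the class with just the property typed (staying in partial mode), which should be enough to surface the Typing[4110] error:
<?hh
namespace Exsys\HHVM;

class HHVMFacade {
  // Typing the property lets hh_client see the int/string mismatch.
  private Vector<int> $vector = Vector {1, 2, 3};

  public function echoProduct(): Vector<string> {
    return $this->vector; // hh_client: Invalid return type (Typing[4110])
  }

  public function test(Vector<string> $vector): void {
    var_dump($vector);
  }
}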
In F# it's a big deal that they do not have null values and do not want to support them. Still, the programmer has to handle cases for None, similar to C# programmers having to check != null.
Is None really less evil than null?
The problem with null is that it can be used almost everywhere, i.e. it can introduce invalid states where this is neither intended nor makes sense.
Having an 'a option is always an explicit thing. You state that an operation can either produce Some meaningful value or None, which the compiler can enforce to be checked and processed correctly.
By discouraging null in favor of an 'a option-type, you basically have the guarantee that any value in your program is somehow meaningful. If some code is designed to work with these values, you cannot simply pass invalid ones, and if there is a function of option-type, you will have to cover all possibilities.
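A minimal sketch of that guarantee (greet and maybeName are made up for illustration):
// An optional value cannot be passed where a plain value is expected.
let greet (name: string) = printfn "Hello, %s" name

let maybeName : string option = Some "Ada"

// greet maybeName        // does not compile: string option is not a string
match maybeName with
| Some n -> greet n       // the compiler forces the unwrap to be explicit
| None -> ()              // ...and the None case to be covered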
Of course it is less evil!
If you don't check against None, then in most cases you'll have a type error in your application, meaning that it won't compile and therefore cannot crash with a NullReferenceException (since None translates to null).
For example:
let myObject : option<_> = getObjectToUse() // you get a Some<'T>, added explicit typing for clarity
match myObject with
| Some o -> o.DoSomething()
| None -> ... // you have to explicitly handle this case
It is still possible to achieve C#-like behavior, but it is less intuitive, as you have to explicitly say "ignore that this can be None":
let o = myObject.Value // throws NullReferenceException if myObject = None
In C#, you're not forced to consider the case of your variable being null, so it is possible that you simply forget to make a check. Same example as above:
var myObject = GetObjectToUse(); // you get back a nullable type
myObject.DoSomething(); // no type error, but a runtime error
Edit: Stephen Swensen is absolutely right, my example code had some flaws, was writing it in a hurry. Fixed. Thank you!
Let's say I show you a function definition like this:
val getPersonByName : (name : string) -> Person
What do you think happens when you pass in a name of a person who doesn't exist in the data store?
Does the function throw a NotFound exception?
Does it return null?
Does it create the person if they don't exist?
Short of reading the code (if you have access to it), reading the documentation (if someone was kind enough to write it), or just calling the function, you have no way of knowing. And that's basically the problem with null values: they look and act just like non-null values, at least until runtime.
Now let's say you have a function with this signature instead:
val getPersonByName : (name : string) -> option<Person>
This definition makes it very explicit what happens: you'll either get a person back or you won't, and this information is communicated in the function's type. Usually, you have a better guarantee of handling both cases of an option type than of a potentially null value.
I'd say option types are much more benevolent than nulls.
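To make that concrete, here is a sketch of how such a signature might be implemented and consumed (Person and the toy data store are hypothetical):
type Person = { Name: string }

// A toy in-memory "data store"; real code would query a database.
let people = [ { Name = "Ada" }; { Name = "Grace" } ]

// The option in the signature documents that the person may not exist.
let getPersonByName (name: string) : Person option =
    people |> List.tryFind (fun p -> p.Name = name)

match getPersonByName "Ada" with
| Some p -> printfn "Found %s" p.Name
| None -> printfn "No such person"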
In F# it's a big deal that they do not have null values and do not want to support them. Still, the programmer has to handle cases for None, similar to C# programmers having to check != null.
Is None really less evil than null?
Whereas null introduces potential sources of run-time error (NullReferenceException) every time you dereference an object in C#, None forces you to make the sources of run-time error explicit in F#.
For example, invoking a method such as Hash on a given object causes C# to silently inject a source of run-time error:
class Foo {
  int m;
  Foo(int n) { m = n; }
  int Hash() { return m; }
  static int hash(Foo o) { return o.Hash(); }
}
In contrast, the equivalent code in F# is expected to be null free:
type Foo =
    { m: int }
    member foo.Hash() = foo.m

let hash (o: Foo) = o.Hash()
If you really wanted an optional value in F# then you would use the option type and you must handle it explicitly or the compiler will give a warning or error:
let maybeHash (o: Foo option) =
    match o with
    | None -> 0
    | Some o -> o.Hash()
You can still get NullReferenceException in F# by circumventing the type system (which is required for interop):
> hash (box null |> unbox);;
System.NullReferenceException: Object reference not set to an instance of an object.
at Microsoft.FSharp.Core.LanguagePrimitives.IntrinsicFunctions.UnboxGeneric[T](Object source)
at <StartupCode$FSI_0021>.$FSI_0021.main#()
Stopped due to error