I would like to get the initializer of the field corrected_time in the code below. I found field.initializer, but couldn't get much further. (The #Init annotation is a temporary workaround for now):
mixin PrerenderDoc on Doc implements AllowDelete {
#Init(init_int: 0)
int corrected_time = 0;
}
I'm guessing that field is an instance of FieldElement. Unfortunately, if that's the case, then the answer is that the analyzer doesn't have a value for the initializer. The analyzer only computes values for (a subset of) expressions that are constant expressions. For field initializers, that means the field needs to be declared const, and the one in the example isn't.
(Annotations are constants and hence have values, which is why your workaround works.)
If the field were declared const, then you could use VariableElement.constantValue to access a representation of the value (VariableElement is a superclass of FieldElement).
The other option available to you is to use the AST structure and examine the structure of the expression, but if you want or need to handle anything more than simple literal values, that can get quite complex.
When we write:
"exampleString".hashCode
Is there a general mathematical way, some algorithm that calculates it, or does it get it from somewhere else in the Dart language?
And is the hash code of a String value the same in other languages like Java, C++, ...?
To answer your questions, hash codes are Dart-specific, and perhaps even specific to a single execution run. To see how the various hash codes work, see the source code for any given class.
The Fine Manual says:
A hash code is a single integer which represents the state of the object that affects operator == comparisons.
All objects have hash codes. The default hash code implemented by Object represents only the identity of the object, the same way as the default operator == implementation only considers objects equal if they are identical (see identityHashCode).
If operator == is overridden to use the object state instead, the hash code must also be changed to represent that state, otherwise the object cannot be used in hash based data structures like the default Set and Map implementations.
Hash codes must be the same for objects that are equal to each other according to operator ==. The hash code of an object should only change if the object changes in a way that affects equality. There are no further requirements for the hash codes. They need not be consistent between executions of the same program and there are no distribution guarantees.
Objects that are not equal are allowed to have the same hash code. It is even technically allowed that all instances have the same hash code, but if clashes happen too often, it may reduce the efficiency of hash-based data structures like HashSet or HashMap.
If a subclass overrides hashCode, it should override the operator == operator as well to maintain consistency.
That last point is important. If two objects are considered == by whatever strategy you choose, they must also always have the same hash code. The converse is not necessarily true.
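The contract itself is not Dart-specific; .NET imposes the same rule on Equals and GetHashCode. A minimal F# sketch (the Point class here is invented for illustration):
type Point(x: int, y: int) =
    member this.X = x
    member this.Y = y
    // Equality is defined by state (the coordinates)...
    override this.Equals(other) =
        match other with
        | :? Point as p -> p.X = x && p.Y = y
        | _ -> false
    // ...so the hash must be computed from exactly that same state,
    // guaranteeing that equal points always hash equally.
    override this.GetHashCode() = hash (x, y)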
Is int type in Dart a value type or reference type?
What is "reference type","value type","canonicalized" in Dart? (concrete definition)
I was looking into the specific definitions of "reference type" and "value type", but in the end I thought it would be best to understand code that demonstrates the difference and the behavior (result) of that code.
If possible, I would like Dart code, but sample code in another language is fine too, as long as it has the same structure as Dart with respect to the issues above.
The meaning of "reference type" and "value type" used here is the distinction made by C#.
A reference type is a traditional object-oriented object. It has identity which it preserves over time. You can have two objects with the same state, but they are different objects.
You don't store reference type values directly into variables. Instead you store a reference to them. When you access members of the object, you do so through the reference (x.foo means "evaluate x to a reference, then access the foo member of the object referenced by that reference").
When you check for identity, identical(a, b), you check whether two different references refer to the same object.
A value type does not have identity. It's defined entirely in terms of its state, its contents. When you assign a value type to a variable, you make a copy of it. It's the "value" in call-by-value.
C# allows you to declare value types with that behavior.
(Reference types do not correspond to call-by-reference, rather to call-by-sharing.)
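Since the distinction described here is the .NET one, it can be sketched concretely in F#; the types below are invented for illustration:
// Reference type: a class. Assignment copies the reference, not the object.
type RefBox(value: int) =
    member val Value = value with get, set

// Value type: a struct. It has no identity; assignment copies the contents.
[<Struct>]
type ValBox =
    val mutable Value: int
    new (v) = { Value = v }

let a = RefBox(1)
let b = a                  // b refers to the same object as a
b.Value <- 99
printfn "%d" a.Value       // 99: one object, two references

let mutable c = ValBox(1)
let mutable d = c          // d is a copy of c's contents
d.Value <- 99
printfn "%d" c.Value       // 1: mutating the copy leaves c untouched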
In Dart, all objects are reference objects. This differs from, say, Java, where an int is a value-type ("primitive type" in their parlance), and they have a separate Integer class which can wrap the int values and is a reference type. That's needed because Integer is a subtype of Object, but int isn't, so you can't store int values in a generic container which expects a subtype of Object.
In Dart, some reference types are still more value-like than others, numbers in particular.
A Dart number, like the integer 1, has precisely one object representing that value. If you write 1 in different places of the program, or even calculate 4 ~/ 4 or 3 - 2, you always get the same object representing the value 1. It's like it's not an object at all, and identical only compares values. And that's precisely what happens, because it allows the compiler to treat integers as primitive values internally, and not worry about sometimes having to give you the same 1 object back that you put in.
This is sometimes expressed as integers being "canonicalized": There is only one canonical instance representing each state.
Dart also canonicalizes other objects.
Constant expressions are canonicalized, so that two different constant expressions generating instances of the same type, with identical states, are made to return the same, canonical, instance representing that value.
Since the value is guaranteed to be immutable, there is no need to have equal objects with different identities. (If they were mutable, it would matter very much which one you mutate and which one you don't.)
Type inference in F# doesn't seem to work very well with parameters that are supposed to take values of a class type.
Is there a way to avoid explicit type annotation on such parameters?
It becomes a problem when there are five or so such parameters, each requiring its own pair of parentheses, a colon, and a type name; the declaration ends up messier than the same declaration in C#, which is known for being the more syntactically noisy language.
So that instead of
let writeXmlAttribute (writer: XmlWriter) name value = ()
I wish I could write something like
let writeXmlAttribute writer name value = () // <-- a problem when it comes to writer.WriteStartAttribute name
Is there a way I can get away with it?
UPDATE:
There is no such problem with records, only with classes.
If your primary reason for wanting to avoid this is a cleaner signature, you could move the explicit typing into the function with an upcast (which will infer the parameter type due to it being a compile-time determination). You're not avoiding it, however, you're just moving it.
let writeXmlAttribute writer name value =
    (writer :> XmlWriter).WriteStartAttribute(name, value)
F# has difficulty with the kind of inference you're asking for in relation to members (including members on records), so you will have to do at least a minimal amount of explicit typing in any case.
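Another middle ground, if one annotation is acceptable, is an F# flexible type, which accepts XmlWriter or any of its subtypes directly:
open System.Xml

// #XmlWriter is shorthand for "some subtype of XmlWriter"; callers can pass
// any derived writer without an explicit upcast.
let writeXmlAttribute (writer: #XmlWriter) name value =
    writer.WriteStartAttribute(name, value)
This doesn't eliminate the annotation, but it confines it to one place and lets callers pass derived writers without upcasting.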
In F# mantra there seems to be a visceral avoidance of null, Nullable<T> and its ilk. In exchange, we are supposed to instead use option types. To be honest, I don't really see the difference.
My understanding of the F# option type is that it allows you to specify a type which can contain any of its normal values, or None. For example, an Option<int> allows all of the values that an int can have, in addition to None.
My understanding of the C# nullable types is that it allows you to specify a type which can contain any of its normal values, or null. For example, a Nullable<int> a.k.a int? allows all of the values that an int can have, in addition to null.
What's the difference? Do some vocabulary replacement with Nullable and Option, null and None, and you basically have the same thing. What's all the fuss over null about?
F# options are general: you can create an Option<'T> for any type 'T.
Nullable<T> is a terrifically weird type; you can only apply it to structs, and though the Nullable type is itself a struct, it cannot be applied to itself. So you cannot create Nullable<Nullable<int>>, whereas you can create Option<Option<int>>. They had to do some framework magic to make that work for Nullable. In any case, this means that for Nullables, you have to know a priori whether the type is a class or a struct, and if it's a class, you need to just use null rather than Nullable. It's an ugly, leaky abstraction; its main value seems to be database interop, as I guess it's common to have 'int or no value' fields to deal with in database domains.
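You can see the asymmetry directly in F# (the commented-out line is rejected by the compiler):
open System

// Options nest freely, for any 'T:
let nested : int option option = Some (Some 1)

// Nullable cannot nest: its type parameter must be a non-nullable value type,
// so this line does not compile:
// let bad : Nullable<Nullable<int>> = Nullable()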
In my opinion, the .NET framework is just an ugly mess when it comes to null and Nullable. You can argue either that F# 'adds to the mess' by having Option, or that it rescues you from the mess by suggesting that you avoid null/Nullable (except when absolutely necessary for interop) and focus on clean solutions with Options. You can find people with both opinions.
You may also want to see
Best explanation for languages without null
Because every .NET reference type can have this extra, meaningless value—whether or not it ever is null, the possibility exists and you must check for it—and because Nullable uses null as its representation of "nothing," I think it makes a lot of sense to eliminate all that weirdness (which F# does) and require the possibility of "nothing" to be explicit. Option<_> does that.
What's the difference?
F# lets you choose whether or not you want your type to be an option type and, when you do, encourages you to check for None and makes the presence or absence of None explicit in the type.
C# forces every reference type to allow null and does not encourage you to check for null.
So it is merely a difference in defaults.
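A small F# sketch of that difference in defaults (the commented lines would be compile errors):
// "No value" must be opted into, and the type advertises it:
let middleName : string option = None

// Types defined in F# do not admit null at all; this does not compile:
// type Person = { Name: string }
// let p : Person = null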
Do some vocabulary replacement with Nullable and Option, null and None, and you basically have the same thing. What's all the fuss over null about?
As languages like SML, OCaml and Haskell have shown, removing null eliminates a lot of run-time errors from real code; indeed, the original creator of null describes it as his "billion dollar mistake".
The advantage to using option is that it makes explicit that a variable can contain no value, whereas nullable types leave it implicit. Given a definition like:
string val = GetValue(object arg);
The type system does not document whether val can ever be null, or what will happen if arg is null. This means that repetitive checks need to be made at function boundaries to validate the assumptions of the caller and callee.
Along with pattern matching, code using option types can be statically checked to ensure both cases are handled; for example, the following code results in a compiler warning about an incomplete match:
let f (io: int option) =
    match io with
    | Some i -> i
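Covering both cases makes the match exhaustive and the warning goes away, for instance with a default for the missing case (0 here is just an arbitrary choice for the sketch):
let f (io: int option) =
    match io with
    | Some i -> i
    | None -> 0   // handle the missing value explicitly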
As the OP mentions, there isn't much of a semantic difference between using the words optional or nullable when conveying optional types.
The problem with the built-in null system becomes apparent when you want to express non-optional types.
In C#, all reference types can be null. So, if we relied on the built-in null to express optional values, all reference types are forced to be optional ... whether the developer intended it or not. There is no way for a developer to specify a non-optional reference type (until C# 8).
So, the problem isn't with the semantic meaning of null. The problem is null is hijacked by reference types.
As a C# developer, I wish I could express optionality using the built-in null system. And that is exactly what C# 8 is doing with nullable reference types.
Well, one difference is that for a Nullable<T>, T can only be a struct, which reduces the use cases dramatically.
Also make sure to read this answer: https://stackoverflow.com/a/947869/288703
I am having a brain freeze on F#'s option types. I have 3 books and have read all I can, but I am not getting them.
Does someone have a clear and concise explanation and maybe a real world example?
TIA
Gary
Brian's answer has been rated as the best explanation of option types, so you should probably read it :-). I'll try to write a more concise explanation using a simple F# example...
Let's say you have a database of products and you want a function that searches the database and returns the product with a specified name. What should the function do when there is no such product? When using null, the code could look like this:
Product p = GetProduct(name);
if (p != null)
    Console.WriteLine(p.Description);
A problem with this approach is that you are not forced to perform the check, so you can easily write code that will throw an unexpected exception when the product is not found:
Product p = GetProduct(name);
Console.WriteLine(p.Description);
When using the option type, you're making the possibility of a missing value explicit. Types defined in F# cannot have a null value, and when you want to write a function that may or may not return a value, you cannot return Product - instead you need to return option<Product>, so the above code would look like this (I added type annotations so that you can see the types):
let (p:option<Product>) = GetProduct(name)
match p with
| Some prod -> Console.WriteLine(prod.Description)
| None -> () // No product found
You cannot directly access the Description property, because the result of the search is not a Product. To get the actual Product value, you need to use pattern matching, which forces you to handle the case when the value is missing.
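For completeness, here is one way such a GetProduct could be written; the Product type and the in-memory product list are made up for the sketch:
type Product = { Name: string; Description: string }

let products =
    [ { Name = "tea"; Description = "Loose-leaf green tea" }
      { Name = "coffee"; Description = "Whole-bean espresso roast" } ]

// List.tryFind already returns an option: Some product on a hit, None otherwise,
// so "not found" is part of the return type rather than a null convention.
let GetProduct name =
    products |> List.tryFind (fun p -> p.Name = name)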
To summarize, the purpose of the option type is to make the aspect of a "missing value" explicit in the type and to force you to check whether a value is available each time you work with values that may be missing.
See http://msdn.microsoft.com/en-us/library/dd233245.aspx
The intuition behind the option type is that it "implements" a null value. But in contrast to null, you have to explicitly state that a value can be missing, whereas in most other languages references can be null by default. There is a similarity to SQL's NULL/NOT NULL, if you are familiar with those.
Why is this clever? It is clever because the language can assume that no output of any expression can ever be null. Hence, it can eliminate all null-pointer checks from the code, yielding a lot of extra speed. Furthermore, it frees the programmer from having to check for the null case everywhere when he or she wants to produce safe code.
For the few cases where a program does require a 'no value' state, the option type exists. As an example, consider a function which asks for a key inside an .ini file. The key's value is an integer, but the .ini file might not contain the key. In this case, it makes sense to return 'nothing' if the key is not found. No integer value can serve as a 'not found' marker: the user might have entered exactly that integer in the file. Hence, we need to 'lift' the domain of integers and give it a new value representing "no information", i.e., the null. So we wrap the 'int' into an 'int option'. Now, if there is no integer value we will get 'None', and if there is an integer value, we will get 'Some(N)' where N is the integer value in question.
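A sketch of such a lookup, with the .ini parsing simplified to a list of key/value pairs for illustration:
// A toy .ini file: just key/value pairs.
let ini = [ "width", "80"; "height", "24" ]

// Some n when the key exists and parses as an integer; None otherwise.
let tryGetIntKey key =
    ini
    |> List.tryFind (fun (k, _) -> k = key)
    |> Option.bind (fun (_, v) ->
        match System.Int32.TryParse v with
        | true, n -> Some n
        | _ -> None)

printfn "%A" (tryGetIntKey "width")   // Some 80
printfn "%A" (tryGetIntKey "depth")   // None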
There are two beautiful consequences of this choice. One, we can use the general pattern-match features of F# to discriminate the values in, e.g., a match expression. Two, the framework of algebraic datatypes used to define the option type is exposed to the programmer. That is, if there were no option type in F#, we could have created it ourselves!