Is there a nicer way of achieving the same result as the following code?
if (window == @intToPtr(?*c.GLFWwindow, 0))
I want to check if a pointer to any kind of object (in this case, a nullable [of course] pointer to a GLFWwindow) is NULL. Is there a better way of doing this, so that I don't have to write @intToPtr(?*T, 0) every time I need a NULL pointer? (This is, of course, a very common occurrence when interfacing with C libraries.)
I tried searching for a solution for a while but found nothing. Maybe, if there isn't a way to do this specific thing in Zig, there's a way to define a C-like macro GLFWwindowNULL for @intToPtr(?*c.GLFWwindow, 0)?
The null keyword can be used in this situation:
if (window == null)
In the F# mantra there seems to be a visceral avoidance of null, Nullable<T>, and their ilk. In exchange, we are supposed to use option types instead. To be honest, I don't really see the difference.
My understanding of the F# option type is that it allows you to specify a type which can contain any of its normal values, or None. For example, an Option<int> allows all of the values that an int can have, in addition to None.
My understanding of the C# nullable types is that it allows you to specify a type which can contain any of its normal values, or null. For example, a Nullable<int> a.k.a int? allows all of the values that an int can have, in addition to null.
What's the difference? Do some vocabulary replacement with Nullable and Option, null and None, and you basically have the same thing. What's all the fuss over null about?
F# options are general, you can create Option<'T> for any type 'T.
Nullable<T> is a terrifically weird type; you can only apply it to structs, and though the Nullable type is itself a struct, it cannot be applied to itself. So you cannot create Nullable<Nullable<int>>, whereas you can create Option<Option<int>>. They had to do some framework magic to make that work for Nullable. In any case, this means that for Nullables, you have to know a priori whether the type is a class or a struct, and if it's a class, you need to just use null rather than Nullable. It's an ugly, leaky abstraction; its main value seems to be database interop, as I guess it's common to have "int, or no value" objects to deal with in database domains.
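To make the nesting point concrete, here is a minimal F# sketch (the names are mine, purely for illustration):
let inner : int option = Some 1
let nested : int option option = Some inner      // options nest fine
let emptyInner : int option option = Some None   // a present value that is itself empty
// The C# counterpart, Nullable<Nullable<int>>, does not compile,
// because Nullable<T> requires T to be a non-nullable value type.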
In my opinion, the .NET framework is just an ugly mess when it comes to null and Nullable. You can argue either that F# 'adds to the mess' by having Option, or that it rescues you from the mess by suggesting that you avoid plain null/Nullable (except when absolutely necessary for interop) and focus on clean solutions with Options. You can find people with both opinions.
You may also want to see
Best explanation for languages without null
Because every .NET reference type can have this extra, meaningless value—whether or not it ever is null, the possibility exists and you must check for it—and because Nullable uses null as its representation of "nothing," I think it makes a lot of sense to eliminate all that weirdness (which F# does) and require the possibility of "nothing" to be explicit. Option<_> does that.
What's the difference?
F# lets you choose whether or not you want your type to be an option type and, when you do, encourages you to check for None and makes the presence or absence of None explicit in the type.
C# forces every reference type to allow null and does not encourage you to check for null.
So it is merely a difference in defaults.
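A tiny F# illustration of that difference in defaults (my own example, not from the original answer):
let total : int = 42            // an ordinary int; it can never be None or null
let found : int option = None   // absence is opted into, and visible in the type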
Do some vocabulary replacement with Nullable and Option, null and None, and you basically have the same thing. What's all the fuss over null about?
As languages like SML, OCaml and Haskell have shown, removing null removes a lot of run-time errors from real code. To the extent that the original creator of null even describes it as his "billion dollar mistake".
The advantage to using option is that it makes explicit that a variable can contain no value, whereas nullable types leave it implicit. Given a definition like:
string val = GetValue(object arg);
The type system does not document whether val can ever be null, or what will happen if arg is null. This means that repetitive checks need to be made at function boundaries to validate the assumptions of the caller and callee.
Along with pattern matching, code using option types can be statically checked to ensure both cases are handled; for example, the following code results in an incomplete-match warning:
let f (io: int option) =
    match io with
    | Some i -> i
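For contrast, here is a hedged F# sketch of the earlier GetValue signature, where the possibility of absence is documented in the type itself (getValue and its backing table are hypothetical names invented for illustration):
let table = dict [ "greeting", "hello" ]
// Returns Some value when the key exists, None otherwise;
// callers must pattern match before using the result.
let getValue (key: string) : string option =
    match table.TryGetValue key with
    | true, v -> Some v
    | false, _ -> None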
As the OP mentions, there isn't much of a semantic difference between using the words optional or nullable when conveying optional types.
The problem with the built-in null system becomes apparent when you want to express non-optional types.
In C#, all reference types can be null. So, if we relied on the built-in null to express optional values, all reference types are forced to be optional ... whether the developer intended it or not. There is no way for a developer to specify a non-optional reference type (until C# 8).
So, the problem isn't with the semantic meaning of null. The problem is null is hijacked by reference types.
As a C# developer, I wish I could express optionality using the built-in null system. And that is exactly what C# 8 is doing with nullable reference types.
Well, one difference is that for a Nullable<T>, T can only be a struct, which reduces the use cases dramatically.
Also make sure to read this answer: https://stackoverflow.com/a/947869/288703
I have a problem understanding the co-existence of null and Option in F#. In a book I read that null is not a proper value in F#, because this way F# eliminates excessive null checking. But it still allows null-initialized references in F#. In other words, you can have null values, but you don't have the weapons to defend yourself with. Why not completely replace nulls with Options? Is it because of compatibility issues with .NET libraries or languages that null is still there? If so, can you give an example that shows why it can't be replaced by Option?
F# avoids the use of null when possible, but it lives in the .NET ecosystem, so it cannot avoid it completely. In a perfect world, there would be no null values, but you just sometimes need them.
For example, you may need to call .NET method with null as an argument and you may need to check whether a result of .NET method call was null.
The way F# deals with this is:
Null is used when working with types that come from .NET (when you have a value or argument of a type declared in .NET, you can use null as a value of that type, and you can test whether it equals null).
Option is needed when working with F# types, because values of types declared in F# cannot be null (the compiler prohibits using null as a value of these types).
In F# programming, you'll probably use option when working with .NET types as well (if you have control over how their values are created). You'll simply never create null values, and you'll use option to have a guarantee that you'll always handle missing values correctly.
This is really the only option. If you wanted to view all types as options implicitly when accessing .NET API, pretty much every method would look like:
option<Control> GetNextChild(option<Form> form, option<Control> current);
...programming with an API like this would be quite painful.
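In practice, F# code usually converts at the boundary instead. A minimal sketch, assuming a WinForms reference is available (tryNextChild is a hypothetical wrapper name; Option.ofObj is the standard FSharp.Core helper that maps a null reference to None):
open System.Windows.Forms
// Wrap the null-returning .NET call once, and work with options after that.
let tryNextChild (form: Form) (current: Control) : Control option =
    form.GetNextControl(current, true) |> Option.ofObj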
I am having a brain freeze on F#'s option types. I have three books and have read all I can, but I am not getting them.
Does someone have a clear and concise explanation and maybe a real world example?
Brian's answer has been rated as the best explanation of option types, so you should probably read it :-). I'll try to write a more concise explanation using a simple F# example...
Let's say you have a database of products and you want a function that searches the database and returns product with a specified name. What should the function do when there is no such product? When using null, the code could look like this:
Product p = GetProduct(name);
if (p != null)
    Console.WriteLine(p.Description);
A problem with this approach is that you are not forced to perform the check, so you can easily write code that will throw an unexpected exception when the product is not found:
Product p = GetProduct(name);
Console.WriteLine(p.Description);
When using the option type, you're making the possibility of a missing value explicit. Types defined in F# cannot have a null value, and when you want to write a function that may or may not return a value, you cannot return Product - instead you need to return option<Product>, so the above code would look like this (I added type annotations so that you can see the types):
let (p:option<Product>) = GetProduct(name)
match p with
| Some prod -> Console.WriteLine(prod.Description)
| None -> () // No product found
You cannot directly access the Description property, because the result of the search is not a Product. To get the actual Product value, you need to use pattern matching, which forces you to handle the case when the value is missing.
To summarize, the purpose of the option type is to make the aspect of "missing value" explicit in the type and to force you to check whether a value is available each time you work with values that may possibly be missing.
See http://msdn.microsoft.com/en-us/library/dd233245.aspx
The intuition behind the option type is that it "implements" a null value. But in contrast to null, you have to explicitly declare that a value can be missing, whereas in most other languages references can be null by default. There is a similarity to SQL's NULL/NOT NULL, if you are familiar with those.
Why is this clever? It is clever because the language can assume that no output of any expression can ever be null. Hence, it can eliminate all null-pointer checks from the code, yielding a lot of extra speed. Furthermore, it frees the programmer from having to check for the null case everywhere in order to produce safe code.
For the few cases where a program does require a null value, the option type exists. As an example, consider a function which asks for a key inside an .ini file. The key returned is an integer, but the .ini file might not contain the key. In this case, it does make sense to return 'null' if the key is not found. None of the integer values is useful as a sentinel here, because the user might have entered exactly that integer value in the file. Hence, we need to 'lift' the domain of integers and give it a new value representing "no information", i.e., the null. So we wrap the 'int' into an 'int option'. Now, if there is no integer value we will get 'None', and if there is an integer value we will get 'Some(N)', where N is the integer value in question.
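A minimal F# sketch of that lookup, assuming the .ini file has already been parsed into a Map (tryGetKey and the Map representation are my own simplifications, not from the original answer):
let tryGetKey (entries: Map<string, string>) (key: string) : int option =
    match Map.tryFind key entries with
    | Some raw ->
        match System.Int32.TryParse raw with
        | true, n -> Some n    // key present and numeric
        | false, _ -> None     // key present but not an integer
    | None -> None             // key absent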
There are two beautiful consequences of the choice. One, we can use the general pattern-matching features of F# to discriminate the values in, e.g., a match expression. Two, the framework of algebraic datatypes used to define the option type is exposed to the programmer. That is, if there were no option type in F#, we could have created it ourselves!
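Indeed, here is a rough sketch of such a hand-rolled definition (named MyOption only to avoid clashing with the built-in type):
// A discriminated union with one case for "no value" and one carrying a value.
type MyOption<'a> =
    | MyNone
    | MySome of 'a
// Pattern matching forces both cases to be handled.
let describe opt =
    match opt with
    | MySome n -> sprintf "value %d" n
    | MyNone -> "no value"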
Assume you have a variety of Number- or int-based variables that you want to be initialized to some default value. But using 0 could be problematic, because 0 is meaningful and could have side effects.
Are there any conventions around this?
I have been working in ActionScript lately and have a variety of value objects with optional parameters, so for most variables I set null, but for Numbers or ints I can't use null. An example:
package com.website.app.model.vo
{
    public class MyValueObject
    {
        public function MyValueObject (
            _id:String=null,
            _amount:Number=0,
            _isPurchased:Boolean=false
        )
        { // Constructor
            if( _id != null ) this.id = _id;
            if( _amount != 0 ) this.amount = _amount;
            if( _isPurchased != false ) this.isPurchased = _isPurchased;
        }
        public var id:String;
        public var amount:Number;
        public var isPurchased:Boolean;
    }
}
The difficulty is that using 0 in the above code might be problematic if the value is never changed from its initial value. It is easy to detect whether a variable has a null value, but detecting 0 may not be so easy, because 0 might be a legitimate value. I want to set a default value to make the parameter optional, but I also want to later detect in my code whether the value was changed from its default, without hard-to-debug side effects.
I suppose I could use something like -1 for a value. I was wondering if there are any well known coding conventions for this kind of thing? I suppose it depends on the nature of the variable and the data.
This is my first Stack Overflow question. Hopefully the gist of it makes sense.
A lot of debuggers will use 0xdeadbeef for initializing registers. I always get a chuckle when I see that.
But, in all honesty, your question contains its own answer - use a value that your variable is not ever expected to become. It doesn't matter what the value is.
Since you asked in a comment, I'll talk a little bit about C and C++. For efficiency reasons, local variables and allocated memory are not initialized by default, but debug builds often fill them to help catch errors. A common value used is 0xcdcdcdcd, which is reasonably unlikely to occur naturally. It has the high bit set and is either a rather large unsigned or a rather large negative signed number. As a pointer address it is odd, which will cause an alignment exception if used on anything but a char (though not on x86). It has no special meaning as a 32-bit floating-point number, so it isn't a perfect choice.
Occasionally you'll see a partially overwritten value in a variable, such as 0xcdcd0000 or 0x0000cdcd. These can be treated as suspicious at the very least.
Sometimes different fill values will be used depending on the allocation area or library. That gives you a clue where a bad value may have originated (i.e., it itself wasn't initialized but was copied from an uninitialized value).
The ideal value would be invalid no matter what alignment you read it from memory with, and invalid for all primitive types. It should also look suspicious to a human, so that even someone who does not know the convention can suspect something is afoot. That's why 0xdeadbeef can be a good choice: the (hex-viewing) programmer will recognize it as the work of a human and not random chance. Note also that it is odd and has the high bit set, so it has that going for it.
The value -1 is often traditionally used as an "out of range" or "invalid" value to indicate failure or non-initialised data. Then again, that goes right down the pan if -1 is a semantically valid value for the variable...or you're using an unsigned type.
You seem to like null (and for a good reason), so why not just use it throughout?
In ActionScript you can only assign Number.NaN to variables that are typed Number, not int or uint.
That being said, because AS3 does not support named arguments, you can always look at the arguments array (a built-in array that all functions have, unless you use the ...rest construct). If that array's length is less than the position of your numeric argument, you know it wasn't passed in.
I often use a maximum value for this. As you say, zero often is a valid value. Generally max-int, while theoretically valid, is safe to exclude. But not always; be careful.
I like 0xD15EA5ED, it's similar to 0xDEADBEEF but is usually more accurate when debugging.
I've seen this around, but never heard a clear explanation of why... This is really for any language, not just C# or VB.NET or Perl or whatever.
When comparing two items, sometimes the "check" value is put on the left side instead of the right. Logically to me, you list your variable first and then the value to which you're comparing. But I've seen the reverse, where the "constant" is listed first.
What (if any) gain is there to this method?
So instead of:
if (myValue > 0)
I've seen:
if (0 < myValue)
or
if (Object.GimmeAnotherObject() != null)
is replaced with:
if (null != Object.GimmeAnotherObject())
Any ideas on this?
Some developers put the constant on the left. That way, if you accidentally type
if(0 = myValue)
you will get an error from the compiler, since you cannot assign a value to a constant. You will have to change it to
if(0 == myValue)
This prevents lots of painful debugging down the road, since typing
if(myValue = 0)
is perfectly legal, but most likely you meant
if(myValue == 0)
The first choice is not what you want. It will subtly change your program and cause all sorts of headaches. Hope that clarifies!
I don't think a simple rule like "put the constant first" or "put the constant last" is a very smart choice. I believe the check should express its semantics. For example, I prefer to use both versions in a range check:
if ((low <= value) && (value <= high))
{
DoStuff(value);
}
But I agree on the examples you mentioned - I would put the constant last and can see no obvious reason for doing it the other way.
if (object != null)
{
DoStuff(object);
}
In C++, both of these are valid and compile:
if(x == 1)
and
if(x=1)
but if you write it like this
if(1==x)
and
if(1=x)
then the assignment to 1 is caught and the code won't compile.
It's considered "safer" to put the constant on the left-hand side.
Once you get into the habit of putting the constant on the left to guard against accidental assignment, it tends to become your default mode of operation; that's why you see it showing up in ordinary equality checks as well.
For .NET, it's irrelevant, as the compiler won't let you make an assignment in a condition like:
if(x=1)
because (as everyone else said) it's bad practice (because it's easy to miss).
Once you don't have to worry about that, it's slightly more readable to put the variable first and the value second, but that's the only difference - both sides need to be evaluated.
It's a coding practice to catch typos like '!=' typed as '=', for example.
If you have a CONSTANT on the left, all accidental assignment operators will be caught by the compiler, since you cannot assign to a constant.
Many languages (specifically C) allow a lot of flexibility in writing code. While the constant on the left may seem unusual to you, you can also combine an assignment and a conditional, as in:
if (var1 = (var2 & var3)) { /* do something */ }
This code stores the result of var2 & var3 in var1 and also executes /* do something */ if that result is nonzero.
A related coding practice is to avoid writing code where conditional expressions contain assignments, even though the programming language allows such things. Assignments within conditionals are unusual, so you do not come across such code a lot.
There is a good C language coding practices article at the IBM DeveloperWorks site that is probably still relevant for people writing in that language.