Warn against BOOL to enum conversion in Xcode - iOS

Previously I had a method that does some task and takes in a BOOL:
-(void)processUserBasedonStatus:(BOOL)newUser
I then defined an NS_ENUM to accept more states like this
typedef NS_ENUM(NSInteger, Status)
{
    StatusNewUser = 0,
    StatusCurrentUser,
    StatusOldUser,
    StatusOther
};
I then updated the method to use the new enum parameter:
-(void)processUserBasedonStatus:(Status)userStatus
Everything works well, except that Xcode didn't complain about the places where I forgot to update the calling code, e.g.
[self processUserBasedonStatus:YES];
[self processUserBasedonStatus:NO];
In this case, YES and NO map to only the first two values of the enum. I experimented with the list of warnings in Xcode, but nothing makes the compiler bring up this warning.
Is there a way to enable the compiler to warn us about this type of behavior?

In (Objective-)C the types _Bool, char, int, long et al and their unsigned counterparts are all integer types. Types defined by enum are not distinct types per se but a collection of constants of their underlying integer type.
The Objective-C type BOOL is either a synonym (typedef or #define) for _Bool or one of the char types, depending on the version of the compiler.
Due to integer promotions and integer conversions many assignments between different integer types are automatic and silent. Even floating point types can get involved, e.g.:
bool b = true;
float f = b;
is perfectly legal and may well pass through the compiler without comment, not even a warning.
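The same silent conversion is at work in the question. Here is a minimal plain-C sketch of that situation (the names mirror the question's Objective-C method, translated into ordinary C for illustration):

#include <stdbool.h>

/* Plain-C mirror of the question's setup. */
typedef enum {
    StatusNewUser = 0,
    StatusCurrentUser,
    StatusOldUser,
    StatusOther
} Status;

/* Takes the enum, as in the updated method. */
static void processUserBasedonStatus(Status userStatus) {
    (void)userStatus;  /* unused in this sketch */
}

int main(void) {
    /* Both calls compile silently: true and false are just the integers
       1 and 0, and integer-to-enum conversion is implicit in C. */
    processUserBasedonStatus(true);   /* ends up as StatusCurrentUser */
    processUserBasedonStatus(false);  /* ends up as StatusNewUser */
    return 0;
}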
Given all this the answer to your question:
Is there a way to enable the compiler to warn us about this type of behaviour?
is, it seems, currently "no": the code is perfectly legal in (Objective-)C. It is, however, dubious and could indicate an error, and there is nothing stopping the compiler from issuing a warning here, as it does for some other legal but dubious constructs. You could submit a report to Apple at bugreport.apple.com (you need an Apple ID to do this) suggesting that they add such a warning.

Related

How can a struct be unsized? [duplicate]

My mental model of data layout in Rust was that all structs' sizes have to be known at compile-time, which means all of their properties have to be known at compile-time recursively. This is why you can't have a struct member that's simply a trait (and why enums have to be used in place of union types): the size can't be known, so instead you have to use either
A generic, so the trait gets "reified" to a size-known struct at usage time
An enum, which has a finite set of possible size-known layouts
A Box, whose size is known because it's just a pointer
But in the docs for Path, it says:
This is an unsized type, meaning that it must always be used behind a pointer like & or Box. For an owned version of this type, see PathBuf.
Yet Path is neither a trait nor a generic struct; it's just a plain struct.
What's wrong with my mental model that this can be possible?
I found this explanation of what dynamically-sized types are, but I still don't understand how I would make one of my own. Is doing so a special privilege reserved for the language itself?
A struct is allowed to contain a single unsized field (which must be its last field), and this makes the struct itself unsized. Constructing a value of an unsized type can be very difficult, and can usually only be done by using unsafe to cast a reference to a sized variant into a reference to the unsized variant.
For example, one might use #[repr(transparent)] to make it possible to cast an &[u8] to a &MySlice like this. This attribute guarantees that the type is represented in exactly the same way as its single field.
#[repr(transparent)]
struct MySlice {
    inner: [u8],
}
It would then be sound to convert slices like this:
impl MySlice {
    pub fn new(slice: &[u8]) -> &MySlice {
        // SAFETY: This is ok because MySlice is #[repr(transparent)]
        unsafe {
            &*(slice as *const [u8] as *const MySlice)
        }
    }
}
You could, for example, refuse to perform the conversion if the slice was not valid ASCII, and then you would have a byte slice that is guaranteed to contain valid ASCII, just like how &str is an &[u8] that is guaranteed to be valid UTF-8.

Why does Apple recommend defining enumeration types as NSInteger?

In Apple's Adopting Modern Objective-C document, it is specified that:
The type for enumerations should be NSInteger.
In addition, for the NS_OPTIONS macro, they say:
However, the type for options should usually be NSUInteger.
Most of the time the enumeration values I use are never negative, so I define them as NSUInteger instead.
What's the rationale behind this decision?
Someone at Apple knew about this new language named "Swift". Swift is very strict with types, so having a mixture of signed and unsigned values is very inconvenient. Therefore, you should use NSInteger even if you know that values are never negative.
An enumeration is not guaranteed to contain only non-negative values; in some cases -1 may be used to mark an error.
Here Apple covers this topic in detail
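To make the -1 case concrete, here is a minimal plain-C sketch (the Status values here are made up for illustration):

#include <stdio.h>

/* Hypothetical status values; -1 marks an error, so the underlying type
   must be signed, which is what NS_ENUM with NSInteger gives you. */
typedef enum { StatusError = -1, StatusOK = 0, StatusPending = 1 } Status;

int main(void) {
    Status s = StatusError;
    if (s < 0) {
        printf("error\n");         /* printed: the signed comparison works */
    }

    /* Stored in an unsigned variable, -1 wraps to a huge positive value,
       so the same check can never succeed (compilers will often warn that
       the comparison is always false). */
    unsigned long u = (unsigned long)StatusError;
    if (u < 0) {
        printf("never reached\n");
    }
    return 0;
}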

Access Delphi record fields via . or ^

I came across something in the Delphi language that I hadn't noticed before. Consider a simple record and a pointer to that record:
TRecord = record
  value : double;
end;
PTRecord = ^TRecord;
Now declare a variable of type PTRecord:
var x : PTRecord;
and create some space:
x := new (PTRecord);
I noticed that I can access the value field using both the '.' notation and the '^.' notation. Thus the following two lines appear to be operationally equivalent; the compiler doesn't complain and everything works fine at runtime:
x.value := 4.5;
x^.value := 2.3;
I would have thought that '^.' was the correct and only way to access value. My question is: is it OK to use the simpler dot notation, or will I run into trouble if I don't use the pointer indirection '^.'? Perhaps this is well-known behavior, but it's the first time I've come across it.
It is perfectly sound and safe to omit the caret. Of course, logic demands the caret, but since the expression x.value doesn't have a meaningful interpretation on its own, the compiler will assume you actually mean x^.value. This feature is part of the so-called 'extended syntax'. You can read about it in the documentation:
When Extended syntax is in effect (the default), you can omit the caret when referencing pointers.
Delphi has supported that syntax for close to two decades. When you use the . operator, the compiler will apply the ^ operator implicitly. Both styles are correct. There is no chance of your program doing the wrong thing, since there is no case where the presence or absence of the ^ affects the interpretation of the subsequent . operator.
Although this behavior is controlled by the "extended syntax" option, nobody ever disables that option, so you can safely rely on it being set in all contexts. It also controls the availability of the implicit Result variable and the way character pointers are compatible with array syntax.
This is called implicit pointer dereferencing for structured types, and it has been inherited from Delphi 1. This language extension aims to let you access members of classes (classes are structured types, and their instances are implicit pointers) with the membership operator (.) alone, avoiding the need for the explicit dereference operator (^).
You can safely rely on the presence of this extension in all Delphi compilers. For more flexibility, you can test for it with the $IFOPT X+ conditional directive.
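For contrast, here is a minimal plain-C sketch mirroring the question's TRecord: C has no such extension, so member access through a pointer always needs an explicit dereference in one form or another.

#include <stdio.h>
#include <stdlib.h>

struct TRecord {
    double value;
};

int main(void) {
    struct TRecord *x = malloc(sizeof *x);
    if (x == NULL) return 1;

    (*x).value = 4.5;   /* explicit dereference, the counterpart of x^.value */
    x->value = 2.3;     /* the usual C shorthand; plain x.value would not compile */

    printf("%f\n", x->value);
    free(x);
    return 0;
}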

What's the difference between an option type and a nullable type?

In F# circles there seems to be a visceral avoidance of null, Nullable<T>, and their ilk. In exchange, we are supposed to use option types instead. To be honest, I don't really see the difference.
My understanding of the F# option type is that it allows you to specify a type which can contain any of its normal values, or None. For example, an Option<int> allows all of the values that an int can have, in addition to None.
My understanding of the C# nullable types is that it allows you to specify a type which can contain any of its normal values, or null. For example, a Nullable<int> a.k.a int? allows all of the values that an int can have, in addition to null.
What's the difference? Do some vocabulary replacement with Nullable and Option, null and None, and you basically have the same thing. What's all the fuss over null about?
F# options are general: you can create Option<'T> for any type 'T.
Nullable<T> is a terrifically weird type; you can only apply it to structs, and though the Nullable type is itself a struct, it cannot be applied to itself. So you cannot create Nullable<Nullable<int>>, whereas you can create Option<Option<int>>. They had to do some framework magic to make that work for Nullable. In any case, this means that for Nullables, you have to know a priori whether the type is a class or a struct, and if it's a class, you need to just use null rather than Nullable. It's an ugly, leaky abstraction; its main value seems to be database interop, as I guess it's common to have 'int or no value' objects to deal with in database domains.
In my opinion, the .NET framework is just an ugly mess when it comes to null and Nullable. You can argue either that F# 'adds to the mess' by having Option, or that it rescues you from the mess by suggesting that you avoid null/Nullable altogether (except when absolutely necessary for interop) and focus on clean solutions with Options. You can find people with both opinions.
You may also want to see
Best explanation for languages without null
Because every .NET reference type can have this extra, meaningless value—whether or not it ever is null, the possibility exists and you must check for it—and because Nullable uses null as its representation of "nothing," I think it makes a lot of sense to eliminate all that weirdness (which F# does) and require the possibility of "nothing" to be explicit. Option<_> does that.
What's the difference?
F# lets you choose whether or not you want your type to be an option type and, when you do, encourages you to check for None and makes the presence or absence of None explicit in the type.
C# forces every reference type to allow null and does not encourage you to check for null.
So it is merely a difference in defaults.
Do some vocabulary replacement with Nullable and Option, null and None, and you basically have the same thing. What's all the fuss over null about?
As languages like SML, OCaml and Haskell have shown, removing null removes a lot of run-time errors from real code. So much so that the original creator of null describes it as his "billion dollar mistake".
The advantage to using option is that it makes explicit that a variable can contain no value, whereas nullable types leave it implicit. Given a definition like:
string val = GetValue(object arg);
The type system does not document whether val can ever be null, or what will happen if arg is null. This means that repetitive checks need to be made at function boundaries to validate the assumptions of the caller and callee.
Along with pattern matching, code using option types can be statically checked to ensure both cases are handled; for example, the following code results in an incomplete-match warning:
let f (io: int option) =
    match io with
    | Some i -> i
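For comparison, the same problem of nullability being invisible in the type shows up with plain C pointers; a minimal sketch with a hypothetical get_value function:

#include <stdio.h>

/* Hypothetical lookup: nothing in the return type says whether NULL is
   a possible result, just like the GetValue example above. */
const char *get_value(int key) {
    return (key == 42) ? "answer" : NULL;
}

int main(void) {
    const char *val = get_value(7);

    /* Every caller has to remember this check; the type system neither
       requires nor documents it. */
    if (val != NULL) {
        printf("%s\n", val);
    } else {
        printf("no value\n");
    }
    return 0;
}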
As the OP mentions, there isn't much of a semantic difference between using the words optional or nullable when conveying optional types.
The problem with the built-in null system becomes apparent when you want to express non-optional types.
In C#, all reference types can be null. So, if we relied on the built-in null to express optional values, all reference types are forced to be optional ... whether the developer intended it or not. There is no way for a developer to specify a non-optional reference type (until C# 8).
So, the problem isn't with the semantic meaning of null. The problem is null is hijacked by reference types.
As a C# developer, I wish I could express optionality using the built-in null system. And that is exactly what C# 8 is doing with nullable reference types.
Well, one difference is that for a Nullable<T>, T can only be a struct, which reduces the use cases dramatically.
Also make sure to read this answer: https://stackoverflow.com/a/947869/288703

Fortran save integer

I came across some code today that looked somewhat like this:
subroutine foo()
    real blah
    integer bar,k,i,j,ll
    integer :: n_called=1
    save integer
    ...
end
It seems like the intent here was probably save n_called, but is that even a valid statement to save all integers, or is it implicitly declaring a variable named integer and saving it?
The second interpretation is correct. Fortran has many keywords, INTEGER being one of them, but it has no reserved words, which means that keywords can be used as identifiers, though this is usually a terrible idea (but nevertheless it carries over to C#, where one can prefix a keyword with @ and use it as an identifier, right?).
The SAVE statement, even if it was intended for n_called, is superfluous. Fortran automatically saves all variables that have initialisers, and that's why the code probably works as intended.
integer :: n_called=1
Here n_called is automatically SAVE. This usually comes as a really bad surprise to C/C++ programmers forced to maintain/extend/create new Fortran code :)
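A rough C analogue of that surprise: in C the persistence has to be requested explicitly with static, whereas in Fortran the initialiser alone implies SAVE.

#include <stdio.h>

void foo(void) {
    int fresh = 1;             /* re-initialised on every call */
    static int n_called = 1;   /* initialised once, persists between calls;
                                  this is how Fortran's integer :: n_called=1
                                  behaves */
    printf("fresh = %d, n_called = %d\n", fresh, n_called);
    n_called++;
}

int main(void) {
    foo();   /* prints: fresh = 1, n_called = 1 */
    foo();   /* prints: fresh = 1, n_called = 2 */
    return 0;
}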
I agree with your 2nd interpretation, that is, the statement save integer implicitly declares a variable called integer and gives it the save attribute. Fortran, of course, has no rule against using keywords as program entity names, though most sensible software developers do have such a rule.
If I try to compile your code snippet as you have presented it, my compiler (Intel Fortran) makes no complaint. If I insert implicit none at the right place it reports the error
This name does not have a type, and must have an explicit type. [INTEGER]
The other interpretation, that it gives the save attribute to all integer variables, seems at odds with the language standards and it's not a variation that I've ever come across.
