Can I create a normal variable in Swift (I mean a non-optional) and assign a nil value to it, or let it become nil later during the app lifecycle?
It confuses me, since it's a little strange compared to traditional strongly typed languages like Java and C#.
No, that's not possible by design. This excerpt from the documentation explains why:
The concept of optionals doesn’t exist in C or Objective-C. The nearest thing in Objective-C is the ability to return nil from a method that would otherwise return an object, with nil meaning “the absence of a valid object.” However, this only works for objects—it doesn’t work for structures, basic C types, or enumeration values. For these types, Objective-C methods typically return a special value (such as NSNotFound) to indicate the absence of a value. This approach assumes that the method’s caller knows there is a special value to test against and remembers to check for it. Swift’s optionals let you indicate the absence of a value for any type at all, without the need for special constants.
You are describing optionals as a bad thing, whereas they are one of the features I appreciate most in the language, because they prevent most null pointer exception bugs.
Another advantage is that when a function can return a non-value (nil for reference types in Objective-C, -1 for integers, etc.), you don't have to choose a value from the spectrum of possible values that a variable of a certain type can have. Not to mention that such a sentinel is a convention that both the caller and the function/method must follow.
Lastly, if you are using too many question and exclamation marks in your code, you should think about whether optionals are really appropriate for the problem (thanks #David for the hint), or take advantage of optional binding more frequently in the cases where optionals really are needed.
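For instance, here is a minimal sketch in current Swift syntax (the function name and sample data are invented for illustration) of a lookup that returns an optional instead of a -1 sentinel, with optional binding at the call site:
// A hypothetical lookup: "not found" is expressed by the type, not by a magic value.
func indexOfFirstMatch(_ target: Int, in values: [Int]) -> Int? {
    for (index, value) in values.enumerated() {
        if value == target {
            return index  // a real value
        }
    }
    return nil            // no value
}

// Optional binding: the caller must handle the nil case explicitly.
if let index = indexOfFirstMatch(3, in: [1, 2, 3]) {
    print("Found at index \(index)")
} else {
    print("Not found")
}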
Suggested reading: Optionals
Addendum
Hint: I've frequently seen optionals used in cases where a variable is declared but cannot be initialized contextually. Non-optional mutable variables are not required to be declared and initialized on the same line - deferred initialization is allowed, provided that the variable is not used before its initialization. For example:
var x: Int // Variable declared here
for var counter = 0; counter < 10; ++counter {
    println(counter)
}
var array = [1, 2, 3]
// ... more lines of code NOT using the x variable
x = 5 // Variable initialized here
print(x)
Hopefully this feature will let you remove several optionals from your code...
Can I create a normal variable in Swift (I mean a non-optional) and assign a nil value to it, or let it become nil later during the app lifecycle?
No.
This is easily testable in the playground:
var str = "Hello, playground"
str = nil
The second line will get this error:
Type 'String' does not conform to protocol 'NilLiteralConvertible'
You might want to read more about Swift Literal Convertibles and see an example of how to use it.
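For a rough idea, here is a minimal sketch of a custom type conforming to that protocol (the type and its property are invented; in Swift 3 and later the protocol is named ExpressibleByNilLiteral):
// A sketch: a wrapper type that can be written as `nil` in source code.
struct Placeholder: NilLiteralConvertible {
    var text: String

    init(_ text: String) {
        self.text = text
    }

    // Invoked when the compiler sees a nil literal where a Placeholder is expected.
    init(nilLiteral: ()) {
        self.text = ""
    }
}

let empty: Placeholder = nil   // compiles only because of the conformance
print(empty.text.isEmpty)      // true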
You are right: you cannot set a non-optional to nil. Although this seems like a burden at first, you gain a lot of safety and readability by giving away a tiny bit of flexibility. Once you get used to it, you will appreciate it more and more.
Related
I was recently working on some test code, building silly view hierarchies, and I ran across this little bit of code that made me squint a little extra hard.
var parentView: UIView = UIView() //Warning here
parentView.addSubview(UIImageView())
On the first line is the following warning:
Variable 'parentView' was never mutated; consider changing to 'let' constant
I'm confused as to why this element is not, or why the compiler believes it is not, being mutated. I thought perhaps that Swift was detecting that a collection within the object was being mutated, and not the properties of the object itself. This led me to attempt the following:
let strings: [String] = []
strings.append("Hi") //This is an error
Given the above code, the only explanation I can come up with is that there is a bug somewhere. Am I missing a keyword somewhere that allows this situation to make sense? Something along the lines of C++'s "mutable" keyword?
If it's not a bug, I'd like to see this scenario re-created in a class that does not inherit from UIObject, so I can understand what causes this.
This is not a bug. For class instances (which are reference types), such as your UIView instance, you may mutate instance properties even if the instance itself is a constant (let). Arrays, on the other hand, are treated as value types in Swift, and members of a constant array (let arr ...) may not be mutated.
Hence, you never actually mutate parentView itself (parentView = UIView() would be a mutation of parentView itself), only its members, and in such a case it makes sense (as the compiler notes) to mark parentView as a constant.
If you create an instance of a structure and assign that instance to a constant, you cannot modify the instance’s properties, even if they were declared as variable properties:
...
This behavior is due to structures being value types. When an instance of a value type is marked as a constant, so are all of its properties.
The same is not true for classes, which are reference types. If you assign an instance of a reference type to a constant, you can still change that instance’s variable properties.
From Language Guide - Properties.
...
You’ve actually been using value types extensively throughout the previous chapters. In fact, all of the basic types in Swift—integers, floating-point numbers, Booleans, strings, arrays and dictionaries—are value types, and are implemented as structures behind the scenes.
All structures and enumerations are value types in Swift. This means that any structure and enumeration instances you create—and any value types they have as properties—are always copied when they are passed around in your code.
From Language Guide - Classes and Structures.
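To see this outside of UIKit, here is a minimal sketch with an invented class and struct (nothing here inherits from a UIKit type):
// Reference type: the constant holds a reference, so var properties can still change.
class Box {
    var value = 0
}

// Value type: a constant instance freezes all of its properties.
struct Point {
    var x = 0
}

let box = Box()
box.value = 42      // fine: `box` still refers to the same instance
// box = Box()      // error: cannot assign to a `let` constant

let point = Point()
// point.x = 42     // error: `point` is a `let` constant of a value type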
I'd like more clarification on optionals in Swift 1.2, e.g. the snippet below:
var numbers : [Array<String?>?]? = []
var getContact: String?
self.getContact = numbers![indexPath.row]?.first!
println(numbers![indexPath.row]?.first!)
println(getContact)
Optional("1-646-961-1869")
nil
It might help to read SomeType? as “something that might either be a value of type SomeType, or nil”.
Based on this, you could read this:
var numbers : [Array<String?>?]? = []
as:
numbers is a variable that might contain an array, or nil. In this part of the code it does contain an array, because you’ve assigned [] (an empty array) to it. The array contains values that are either arrays, or nil. And each element of those arrays is either a String, or nil.
What nil means depends on the context. It might indicate failure, or “not set yet”, or something else. For example, in let maybeInt = Int("foo"), maybeInt will be an Int? (which is shorthand for Optional<Int>), and will be set to nil, and in this case the nil means that "foo" cannot be turned into a number. In some other languages, you might instead get an exception, or the number 0 or -1, or a second parameter might be used to indicate failure, but Swift uses optionals.
Optionals can be overused. For example, an optional array is often suspicious – usually an empty array is sufficient. If you choose to make an array optional, ask the question “what is different between the array being nil and the array being empty?”. If there’s no difference, it probably shouldn’t be an optional, just a regular array.
(if the library/framework you’re using likes to traffic in these things without a clear reason, it might be a sign it’s not a very good library)
When you have an optional like numbers here, ! means “I know this is an optional, but I don’t want to deal with the optionality so just assume it’s not nil and if it is, crash.” Crash hard, right there, right then.
So for example, the array property first returns the first element of the array, unless the array is empty (like it is in your example code), in which case it will return nil. So in your example, writing numbers![something] won’t crash because of the !, because you set numbers to be some value. But numbers![something]?.first! might crash if the array at index something is empty.
Lots of ! in your code is usually another bad sign, often suggesting that you (or the author of your library) shouldn’t have made something optional because it isn’t ever nil, or that there are failure cases (like an array sometimes being empty) that you’re ignoring.
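If it helps, here is a self-contained sketch in current Swift syntax, using sample data shaped like the numbers variable from the question (the phone number and index are made up), that unwraps each layer with optional binding instead of !:
let numbers: [Array<String?>?]? = [["1-646-961-1869"]]
let row = 0   // stands in for indexPath.row

var getContact: String?

// Each `if let` peels off one layer of optionality instead of forcing it with !.
if let allNumbers = numbers,                        // [Array<String?>?]
   allNumbers.indices.contains(row),                // avoid an out-of-range crash
   let contactNumbers = allNumbers[row],            // [String?]
   let firstEntry = contactNumbers.first,           // String?
   let contact = firstEntry {                       // String
    getContact = contact
}

print(getContact ?? "no contact available")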
This is a difficult question to answer for a number of reasons. First, this kind of use of optionals is generally frowned upon because it's error-prone and hard to understand. Second, there isn't much context for us to work with. Third, most people are using Swift 2.0 at this point because it's backwards-compatible.
If you're trying to understand how optionals work, you might find my tutorial on Swift optionals useful.
I have been doing Swift programming for a few months now and I have always been curious about this...
Is there an advantage to telling the Swift compiler the type of an object in its declaration?
I.e. let image: UIImage = UIImage()
Compared to NOT telling the compiler and having it infer the type at runtime, i.e. let image = UIImage()
I would think it would be more efficient to tell the compiler the object type instead of having it infer the type. I know this question applies to Objective-C syntax as well, so I'll add that in the tags.
There’s zero runtime efficiency difference between the two. During compilation, Swift is inferring the type and writing it in for you. But once compiled, the two statements are identical.
It’s purely a question of readability and, occasionally, compiler efficiency.
Readability because in the statement let image: UIImage = UIImage(), the double appearance of UIImage is just clutter. And in cases of more complex types, it’s pretty much essential – no-one wants to write let keys: LazyForwardCollection<MapCollectionView<Dictionary<String, Int>, String>> = dict.keys when they can write let keys = dict.keys.
Compiler efficiency because occasionally you’ll find that a particularly ambiguous expression (literals of literals are notorious for this), where lots of overloads need to be resolved, can compile a lot faster if you explicitly name the type on the left-hand side. But this is just a question of how fast it compiles, not how fast it runs once it has compiled.
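As an illustration of that last point (treat this as a sketch of the idea; the actual compile-time benefit varies between compiler versions and expressions):
// Every numeric literal here could be one of several types; annotating the
// result narrows the overloads the type checker has to consider.
let annotated: Double = 1 + 2 * 3.5 + 4 / 8 + 0.25

// The same expression with inference alone also compiles to the same value;
// the annotation only affects how the literals are resolved at compile time.
let inferred = 1 + 2 * 3.5 + 4 / 8 + 0.25

print(annotated == inferred)  // true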
From Swift Documentation:
It is rare that you need to write type annotations in practice. If you provide an initial value for a constant or variable at the point that it is defined, Swift can almost always infer the type to be used for that constant or variable, as described in Type Safety and Type Inference
So it doesn't matter whether you declare the instance's type or not.
I was wondering about the difference between using and not using type annotations (var a: Int = 1 vs var a = 1) in Swift, so I read Apple's The Swift Programming Language.
However, it only says:
You can provide a type annotation when you declare a constant or variable, to be clear about the kind of values the constant or variable can store.
and
It is rare that you need to write type annotations in practice. If you provide an initial value for a constant or variable at the point that it is defined, Swift can almost always infer the type to be used for that constant or variable
It doesn't mention the pros and cons.
It's obvious that using type annotations makes code clear and self-explanatory, whereas omitting them makes code quicker to write.
Nonetheless, I'd like to know if there are any other reasons (for example, from the perspective of performance or the compiler) why I should or should not use type annotations in general.
It is entirely syntactic, so as long as you give the compiler enough information to infer the correct type, the effect and performance at run time are exactly the same.
Edit: I missed your reference to the compiler - I cannot see it having any significant impact on compile times either, as it needs to evaluate your assignment expression and check type compatibility anyway.
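One small sketch of when an annotation is still worth writing: when the inferred type is not the one you want, or when there is no initial value to infer from (the names here are invented):
let count = 1            // inferred as Int
let ratio: Double = 1    // the same literal, but the annotation makes it a Double

var name: String         // no initial value, so the type must be written out
name = "Swift"

print(count, ratio, name)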
Assume you have a variety of number- or int-based variables that you want to be initialized to some default value. But using 0 could be problematic because 0 is meaningful and could have side effects.
Are there any conventions around this?
I have been working in ActionScript lately and have a variety of value objects with optional parameters, so for most variables I set null, but for numbers or ints I can't use null. An example:
package com.website.app.model.vo
{
    public class MyValueObject
    {
        public function MyValueObject (
            _id:String=null,
            _amount:Number=0,
            _isPurchased:Boolean=false
        )
        { // Constructor
            if( _id != null ) this.id = _id;
            if( _amount != 0 ) this.amount = _amount;
            if( _isPurchased != false ) this.isPurchased = _isPurchased;
        }

        public var id:String;
        public var amount:Number;
        public var isPurchased:Boolean;
    }
}
The difficulty is that using 0 in the above code might be problematic if the value is not ever changed from its initial value. It is easy to detect whether a variable has a null value, but detecting 0 may not be so easy because 0 might be a legitimate value. I want to set a default value to make the parameter optional, but I also want to later detect in my code whether the value was changed from its default, without hard-to-debug side effects.
I suppose I could use something like -1 for a value. I was wondering if there are any well-known coding conventions for this kind of thing? I suppose it depends on the nature of the variable and the data.
This is first my stack overflow question. Hopefully the gist of my question makes sense.
A lot of debuggers will use 0xdeadbeef for initializing registers. I always get a chuckle when I see that.
But, in all honesty, your question contains its own answer - use a value that your variable is not ever expected to become. It doesn't matter what the value is.
Since you asked in a comment, I'll talk a little bit about C and C++. For efficiency reasons, local variables and allocated memory are not initialized by default, but debug builds often fill them with a known pattern to help catch errors. A common value used is 0xcdcdcdcd, which is reasonably unlikely. It has the high bit set and is either a rather large unsigned or a rather large negative signed number. As a pointer address it is odd, which will cause an alignment exception if used on anything but a char (but not on x86). It has no special meaning as a 32-bit floating point number, so it isn't a perfect choice.
Occasionally you'll see a partially aligned value in a variable, such as 0xcdcd0000 or 0x0000cdcd. These can be treated as suspicious at the very least.
Sometimes different values will be used depending on the allocation area or library. That gives you a clue where a bad value may have originated (i.e., it itself wasn't initialized, but it was copied from an uninitialized value).
The ideal value would be invalid no matter what alignment you read from memory, and invalid over all primitive types. It also should look suspicious to a human, so even if they do not know the convention they can suspect something is afoot. That's why 0xdeadbeef can be a good choice: the (hex-viewing) programmer will recognize it as the work of a human and not random chance. Note also that it is odd and has the high bit set, so it has that going for it.
The value -1 is often traditionally used as an "out of range" or "invalid" value to indicate failure or non-initialised data. Then again, that goes right down the pan if -1 is a semantically valid value for the variable...or you're using an unsigned type.
You seem to like null (and for a good reason), so why not just use it throughout?
In ActionScript you can only assign Number.NaN to variables that are typed Number, not int or uint.
That being said, because AS3 does not support named arguments you can always look at the arguments array (it's a built-in array that all functions have, unless you use the ...rest construct). If that array's length is less than the position of your numeric argument you know it wasn't passed in.
I often use a maximum value for this. As you say, zero often is a valid value. Generally max-int, while theoretically valid, is safe to exclude. But not always; be careful.
I like 0xD15EA5ED; it's similar to 0xDEADBEEF but is usually more accurate when debugging.